Dec 12 14:10:17 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 14:10:17 crc kubenswrapper[5113]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.281541    5113 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284190    5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284219    5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284224    5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284228    5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284233    5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284239    5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284244    5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284249    5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284253    5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284257    5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284261    5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284265    5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284269    5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284273    5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284277    5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284281    5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284285    5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284289    5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284295    5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284299    5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284303    5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284307    5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284311    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284315    5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284319    5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284323    5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284326    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284331    5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284335    5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284339    5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284342    5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284349    5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284354    5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284367    5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284379    5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284384    5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284388    5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284392    5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284396    5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284402    5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284406    5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284410    5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284414    5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284419    5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284424    5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284431    5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284435    5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284439    5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284444    5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284447    5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284451    5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284455    5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284459    5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284462    5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284466    5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284470    5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284474    5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284478    5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284483    5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284486    5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284490    5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284495    5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284499    5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284503    5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284506    5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284511    5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284521    5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284534    5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284538    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284543    5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284548    5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284555    5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284560    5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284564    5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284568    5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284571    5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284575    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284579    5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284582    5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284586    5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284590    5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284594    5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284597    5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284601    5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284605    5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.284608    5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285285    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285296    5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285300    5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285304    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285308    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285312    5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285317    5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285321    5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285325    5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285328    5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285332    5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285337    5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285349    5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285355    5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285359    5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285363    5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285366    5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285370    5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285373    5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285377    5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285380    5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285383    5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285387    5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285390    5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285393    5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285397    5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285400    5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285404    5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285407    5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285410    5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285413    5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285417    5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285420    5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285423    5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285426    5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285430    5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285433    5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285436    5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285440    5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285443    5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285446    5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285450    5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285453    5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285457    5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285461    5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285472    5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285476    5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285479    5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285483    5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285486    5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285490    5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285494    5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285498    5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285502    5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285505    5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285508    5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285512    5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285515    5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285518    5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285522    5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285526    5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285530    5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285533    5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285536    5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285539    5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285543    5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285546    5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285549    5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285552    5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285555    5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285559    5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285562    5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285565    5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285569    5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285572    5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285576    5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285579    5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285582    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285592    5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285596    5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285600    5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285603    5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285607    5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285610    5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285613    5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.285617    5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286022    5113 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286040    5113 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286055    5113 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286061    5113 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286067    5113 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286071    5113 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286076    5113 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286082    5113 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286086    5113 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286090    5113 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286095    5113 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286099    5113 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286104    5113 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286107    5113 flags.go:64] FLAG: --cgroup-root=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286111    5113 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286115    5113 flags.go:64] FLAG: --client-ca-file=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286122    5113 flags.go:64] FLAG: --cloud-config=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286126    5113 flags.go:64] FLAG: --cloud-provider=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286130    5113 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286155    5113 flags.go:64] FLAG: --cluster-domain=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286158    5113 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286163    5113 flags.go:64] FLAG: --config-dir=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286167    5113 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286171    5113 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286176    5113 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286187    5113 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286192    5113 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286203    5113 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286207    5113 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286211    5113 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286215    5113 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286219    5113 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286224    5113 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286235    5113 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286242    5113 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286247    5113 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286252    5113 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286257    5113 flags.go:64] FLAG: --enable-server="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286261    5113 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286275    5113 flags.go:64] FLAG: --event-burst="100"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286279    5113 flags.go:64] FLAG: --event-qps="50"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286282    5113 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286286    5113 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286290    5113 flags.go:64] FLAG: --eviction-hard=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286295    5113 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286299    5113 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286303    5113 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286307    5113 flags.go:64] FLAG: --eviction-soft=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286311    5113 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286315    5113 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286318    5113 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286322    5113 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286326    5113 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286330    5113 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286334    5113 flags.go:64] FLAG: --feature-gates=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286339    5113 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286343    5113 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286347    5113 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286358    5113 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286366    5113 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286370    5113 flags.go:64] FLAG: --help="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286374    5113 flags.go:64] FLAG: --hostname-override=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286377    5113 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286382    5113 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286390    5113 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286394    5113 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286398    5113 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286402    5113 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286406    5113 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286411    5113 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286416    5113 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286421    5113 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286426    5113 flags.go:64] FLAG: --kube-api-qps="50"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286431    5113 flags.go:64] FLAG: --kube-reserved=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286440    5113 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286450    5113 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286456    5113 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286462    5113 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286467    5113 flags.go:64] FLAG: --lock-file=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286472    5113 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286476    5113 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286481    5113 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286490    5113 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286495    5113 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286499    5113 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286508    5113 flags.go:64] FLAG: --logging-format="text"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286513    5113 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286519    5113 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286524    5113 flags.go:64] FLAG: --manifest-url=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286528    5113 flags.go:64] FLAG: --manifest-url-header=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286544    5113 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286560    5113 flags.go:64] FLAG: --max-open-files="1000000"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286569    5113 flags.go:64] FLAG: --max-pods="110"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286573    5113 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286578    5113 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286584    5113 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286588    5113 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286593    5113 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286597    5113 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286601    5113 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286612    5113 flags.go:64] FLAG: --node-status-max-images="50"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286616    5113 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286620    5113 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286624    5113 flags.go:64] FLAG: --pod-cidr=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286627    5113 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286635    5113 flags.go:64] FLAG: --pod-manifest-path=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286639    5113 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286643    5113 flags.go:64] FLAG: --pods-per-core="0"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286646    5113 flags.go:64] FLAG: --port="10250"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286650    5113 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286654    5113 flags.go:64] FLAG: --provider-id=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286657    5113 flags.go:64] FLAG: --qos-reserved=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286661    5113 flags.go:64] FLAG: --read-only-port="10255"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286670    5113 flags.go:64] FLAG: --register-node="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286717    5113 flags.go:64] FLAG: --register-schedulable="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286722    5113 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286731    5113 flags.go:64] FLAG: --registry-burst="10"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286739    5113 flags.go:64] FLAG: --registry-qps="5"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286743    5113 flags.go:64] FLAG: --reserved-cpus=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286747    5113 flags.go:64] FLAG: --reserved-memory=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286752    5113 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286757    5113 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286761    5113 flags.go:64] FLAG: --rotate-certificates="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286766    5113 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286780    5113 flags.go:64] FLAG: --runonce="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286786    5113 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286790    5113 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286794    5113 flags.go:64] FLAG: --seccomp-default="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286798    5113 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286803    5113 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286807    5113 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286813    5113 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286817    5113 flags.go:64] FLAG: --storage-driver-password="root"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286821    5113 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286825    5113 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286830    5113 flags.go:64] FLAG: --storage-driver-user="root"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286834    5113 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286838    5113 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286843    5113 flags.go:64] FLAG: --system-cgroups=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286846    5113 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286854    5113 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286858    5113 flags.go:64] FLAG: --tls-cert-file=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286883    5113 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286894    5113 flags.go:64] FLAG: --tls-min-version=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286899    5113 flags.go:64] FLAG: --tls-private-key-file=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286904    5113 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286908    5113 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286913    5113 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286917    5113 flags.go:64] FLAG: --v="2"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286932    5113 flags.go:64] FLAG: --version="false"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286937    5113 flags.go:64] FLAG: --vmodule=""
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286944    5113 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.286948    5113 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287097    5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287104    5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287108    5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287116    5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287155    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287162    5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287166    5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287170    5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287174    5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287179    5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287183    5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287187    5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287191    5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287195    5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287198    5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287203    5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287224    5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287230    5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287235    5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287239    5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287244    5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287248    5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287253    5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287257    5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287261    5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287266    5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287270    5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287274    5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287281    5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287285    5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287289    5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287293    5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287299    5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287304    5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287309    5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287316    5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287320    5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287337    5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287343    5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287347    5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287352    5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287358    5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287364    5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287368    5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287372    5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287376    5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287382    5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287387 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287391 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287395 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287399 5113 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287404 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287408 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287412 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287416 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287420 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287425 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287429 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287434 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287437 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287444 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287458 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287464 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287468 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287471 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287475 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287480 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287487 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287491 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287494 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287509 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287514 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287518 5113 feature_gate.go:328] 
unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287522 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287526 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287530 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287535 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287539 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287544 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287548 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287552 5113 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287556 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287560 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287565 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287570 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.287574 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.287752 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.297162 5113 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.297234 5113 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297306 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297315 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297319 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297323 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297328 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297332 5113 feature_gate.go:328] unrecognized feature gate: 
AdditionalRoutingCapabilities Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297335 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297339 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297343 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297347 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297351 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297355 5113 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297359 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297364 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297369 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297373 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297378 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297383 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297388 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297392 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297397 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297403 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
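The long runs of W-level lines here come from feature_gate.go rejecting gate names it does not register: the kubelet is evidently handed the cluster's full OpenShift feature-gate list, and every name outside the kubelet's own registry is warned about and skipped rather than treated as fatal, while deprecated or GA gates that are still set (KMSv1, ServiceAccountTokenNodeBinding) get the softer "will be removed" warning instead. A minimal standalone sketch of that warn-and-skip parsing, assuming invented gate names and a hypothetical setFromMap helper; this is illustrative only, not the actual k8s.io/component-base implementation:

    // featuregates_sketch.go - standalone sketch (NOT the real
    // k8s.io/component-base code) of why each unknown key above produces
    // one "unrecognized feature gate" warning while startup continues.
    package main

    import "fmt"

    // known maps the gates this component understands to their defaults.
    // Gate names here are examples taken from the log.
    var known = map[string]bool{
        "ImageVolume":                    true,
        "NodeSwap":                       false,
        "ServiceAccountTokenNodeBinding": true, // GA in this release
    }

    // setFromMap applies requested gate values, warning on unknown names
    // instead of failing hard, matching the W-level lines in the log.
    func setFromMap(requested map[string]bool) map[string]bool {
        effective := map[string]bool{}
        for name, def := range known {
            effective[name] = def
        }
        for name, val := range requested {
            if _, ok := known[name]; !ok {
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue
            }
            effective[name] = val
        }
        fmt.Printf("I feature gates: %v\n", effective)
        return effective
    }

    func main() {
        setFromMap(map[string]bool{
            "ImageVolume":          true,
            "GatewayAPIController": true, // OpenShift-only gate: warned, ignored
        })
    }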
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297412 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297416 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297420 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297424 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297428 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297431 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297437 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297441 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297445 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297449 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297453 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297456 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297460 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297463 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297467 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297470 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297474 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297477 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297481 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297485 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297488 5113 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297492 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297496 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297499 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297503 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297508 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297513 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297517 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297520 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297525 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297528 5113 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297532 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297535 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297538 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297542 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297546 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297549 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297552 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297555 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297560 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297563 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297566 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297570 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297573 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297577 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297580 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297584 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297587 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297590 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297594 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297597 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297601 
5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297604 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297607 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297611 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297614 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297618 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297621 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297625 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297629 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297632 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297637 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297640 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297644 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.297652 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297775 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297783 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297787 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297791 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297794 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297798 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297801 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297805 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297808 5113 
feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297812 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297816 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297819 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297823 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297826 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297830 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297836 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297840 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297843 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297847 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297850 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297853 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297857 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297860 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297864 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297867 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297871 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297877 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297881 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297888 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297893 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297898 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297902 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297907 5113 feature_gate.go:328] unrecognized feature gate: Example Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297911 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 
12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297915 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297919 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297923 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297927 5113 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297932 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.297936 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298250 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298257 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298260 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298264 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298267 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298273 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298276 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298280 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298283 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298286 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298290 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298293 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298297 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298300 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298305 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298308 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298311 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298315 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298318 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 
14:10:17.298322 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298325 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298331 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298335 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298339 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298342 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298346 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298349 5113 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298352 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298356 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298359 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298363 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298366 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298370 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298374 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298377 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298380 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298384 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298388 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298391 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298395 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298398 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298401 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298404 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298408 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298411 
5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 14:10:17 crc kubenswrapper[5113]: W1212 14:10:17.298415 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.298422 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.299263 5113 server.go:962] "Client rotation is on, will bootstrap in background"
Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.304995 5113 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.308244 5113 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.308360 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.309088 5113 server.go:1019] "Starting client certificate rotation"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.309336 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.309414 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.316638 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.318028 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.318716 5113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.327861 5113 log.go:25] "Validated CRI v1 runtime API"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.348166 5113 log.go:25] "Validated CRI v1 image API"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.350541 5113 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.354368 5113 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-14-04-15-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.354471 5113 fs.go:136]
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.374573 5113 manager.go:217] Machine: {Timestamp:2025-12-12 14:10:17.372762848 +0000 UTC m=+0.208012685 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649922048 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:c314c9b7-f73b-4b1a-9a4f-2e5666868333 BootID:c1dc7520-c661-426a-968f-51bfb5017ad6 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107656 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824958976 Type:vfs Inodes:4107656 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c8:a5:27 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c8:a5:27 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:06:20:d7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:32:40:42 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c9:28:6d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:11:33:c5 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:6e:0c:7c:41:ee Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:50:db:df:17:06 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649922048 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.374890 5113 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
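The E-level bootstrap.go:266 entry earlier in this startup is the root of the certificate churn that follows: the client certificate embedded in /var/lib/kubelet/kubeconfig expired on 2025-12-03, so the kubelet falls back to bootstrap credentials and immediately tries to request a new signed certificate, which then fails with connection refused because the API server at api-int.crc.testing:6443 is not up yet. A small diagnostic sketch, assuming only the Go standard library, that checks the validity window of such a PEM file; the path is the one named in the log, everything else is illustrative:

    // certexpiry_sketch.go - parses a kubelet client certificate PEM and
    // reports its validity window, the same check that produced the
    // "bootstrap client certificate ... is expired" error above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        path := "/var/lib/kubelet/pki/kubelet-client-current.pem"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
        // The file may bundle cert and key; walk all PEM blocks and
        // inspect only CERTIFICATE blocks.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Fprintln(os.Stderr, "parse:", err)
                continue
            }
            expired := time.Now().After(cert.NotAfter)
            fmt.Printf("subject=%s notAfter=%s expired=%v\n",
                cert.Subject, cert.NotAfter.UTC(), expired)
        }
    }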
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.375077 5113 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376288 5113 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376337 5113 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376597 5113 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376610 5113 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376635 5113 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.376869 5113 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.377356 5113 state_mem.go:36] "Initialized new in-memory state store" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.377650 5113 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.378491 5113 kubelet.go:491] "Attempting to sync node with API server" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.378533 5113 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.378585 5113 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.378619 
5113 kubelet.go:397] "Adding apiserver pod source" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.378670 5113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.380092 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.380320 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.381924 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.381984 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.383417 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.383443 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.385064 5113 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.385472 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.385955 5113 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386397 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386421 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386430 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386443 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386452 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386460 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386468 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386497 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386507 5113 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386520 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386547 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386665 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386849 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.386863 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.388179 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.399369 5113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.399484 5113 server.go:1295] "Started kubelet" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.399807 5113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.399957 5113 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.400171 5113 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.401464 5113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 14:10:17 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.401568 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.401944 5113 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.402614 5113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.402603 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.402744 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.402784 5113 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.402843 5113 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.403036 5113 server.go:317] "Adding debug handlers to kubelet server" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.403309 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.407148 5113 factory.go:55] Registering systemd factory Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.407250 5113 factory.go:223] Registration of the systemd container factory successfully Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.412865 5113 factory.go:153] Registering CRI-O factory Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.412905 5113 factory.go:223] Registration of the crio container factory successfully Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.413012 5113 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.413043 5113 factory.go:103] Registering Raw factory Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.413061 5113 manager.go:1196] Started watching for new ooms in manager Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.413252 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18807d1cfdc64e0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC m=+0.234672301,LastTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC 
m=+0.234672301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.413845 5113 manager.go:319] Starting recovery of all containers Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445591 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445648 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445661 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445675 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445689 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445700 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445712 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445724 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445738 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445771 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 
14:10:17.445781 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445794 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445804 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445815 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445830 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445870 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445933 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445944 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445955 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445967 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.445977 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446035 5113 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446050 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446078 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446090 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446103 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446112 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446124 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446153 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446162 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446228 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446274 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446284 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446290 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446298 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446307 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446314 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446322 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446329 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446352 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446374 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446382 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446392 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446401 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446410 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446433 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446443 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446466 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446487 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446496 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446505 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446539 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446548 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446557 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446566 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446587 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446602 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446611 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446620 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446677 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446686 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446732 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446746 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446802 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446816 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446827 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446838 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446847 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446858 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446866 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446874 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446909 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446934 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446942 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446951 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446964 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446974 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446982 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.446990 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447017 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447057 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447067 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447075 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447084 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447092 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447100 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447109 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447207 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447217 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447248 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447259 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447313 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447325 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447335 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447343 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447375 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447384 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447424 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447432 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447440 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447447 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447455 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447464 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447489 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447514 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447538 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447546 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447554 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447563 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447571 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447580 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447605 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447640 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447648 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447656 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447663 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447671 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447680 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447688 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447714 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447728 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447770 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447780 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447789 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447876 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447886 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447893 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447921 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447929 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447936 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447962 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447976 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447986 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.447993 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448001 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448221 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448245 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448269 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448277 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448290 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448300 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448307 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448315 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448342 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448371 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448382 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448393 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448409 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448425 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448434 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448442 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448472 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448481 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448513 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448521 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448529 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448540 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448590 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448605 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448634 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448643 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448651 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448659 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448695 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448705 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448715 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448752 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448781 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448802 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448844 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448852 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448860 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448868 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448913 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448922 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448943 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448964 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448977 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448985 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.448993 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449001 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449008 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449018 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449044 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449053 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449077 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449087 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449153 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449169 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449179 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449231 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449296 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.449312 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.450982 5113 manager.go:324] Recovery completed Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.451881 5113 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452007 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452054 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452070 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" 
seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452080 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452091 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452101 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452152 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452177 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452186 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452205 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452218 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452230 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452242 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452253 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 12 14:10:17 
crc kubenswrapper[5113]: I1212 14:10:17.452281 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452307 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452359 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452410 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452422 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452438 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452449 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452460 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452471 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452510 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452522 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452532 5113 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452561 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452576 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452586 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452622 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452635 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452809 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452831 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452842 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452853 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452864 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452873 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452883 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452893 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452914 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452923 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452933 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452987 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.452999 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453007 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453017 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453046 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453070 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453094 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453103 5113 reconstruct.go:97] "Volume reconstruction finished" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.453109 5113 reconciler.go:26] "Reconciler: start to sync state" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.474425 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.477351 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.477844 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.477991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.478013 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.479875 5113 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.480590 5113 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.480686 5113 state_mem.go:36] "Initialized new in-memory state store" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.481407 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.481475 5113 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.481521 5113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.481535 5113 kubelet.go:2451] "Starting kubelet main sync loop" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.481599 5113 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.483681 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.488556 5113 policy_none.go:49] "None policy: Start" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.488712 5113 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.488795 5113 state_mem.go:35] "Initializing new in-memory state store" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.504173 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.542910 5113 manager.go:341] "Starting Device Plugin manager" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.543000 5113 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.543018 5113 server.go:85] "Starting device plugin registration server" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.543655 5113 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.543677 5113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.543880 5113 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.544008 5113 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.544024 5113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.582961 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.583325 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.584609 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.584665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.584682 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc 
kubenswrapper[5113]: I1212 14:10:17.585644 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.586021 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.586112 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.586419 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.586452 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.586462 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587033 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587101 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587111 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587141 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587441 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.587514 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588294 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588339 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588353 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588296 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588437 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.588452 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.589498 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.589623 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.589672 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590224 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590274 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590295 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590317 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590344 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.590356 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.591913 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.591934 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.592107 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.592803 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.592841 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.592855 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.593120 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.593165 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.593180 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.593799 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.593840 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.594797 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.594840 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.594852 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.603306 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.635588 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.644911 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.646646 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.646714 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.646727 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.646762 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.647595 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.656416 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658664 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658836 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658894 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658929 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658964 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.658987 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659208 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659189 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659700 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659761 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659786 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659807 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659846 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659919 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.659989 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660024 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660055 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660084 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660145 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660194 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660220 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660241 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660266 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660267 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660266 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660290 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660703 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.660778 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.661198 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.662522 5113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="non-existent label \"crio-containers\"" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.662772 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.682800 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.705918 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.713637 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762773 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762847 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762879 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762897 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762922 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762935 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762999 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763030 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod 
\"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763030 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.762949 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763142 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763206 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763246 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763251 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763313 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763324 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763324 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763346 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763424 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763449 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763457 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763481 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763487 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763549 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763572 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763603 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763671 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763684 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763711 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763625 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.763740 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.847740 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.849476 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.849531 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.849543 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.849577 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: E1212 14:10:17.850194 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.957246 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.958466 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:10:17 crc kubenswrapper[5113]: I1212 14:10:17.984280 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.005265 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.007385 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.014784 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:10:18 crc kubenswrapper[5113]: W1212 14:10:18.069474 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-400b14e89c49e8d0022bb77937d1f4fd52e02fa214d4f50f1894d937d9f31fe2 WatchSource:0}: Error finding container 400b14e89c49e8d0022bb77937d1f4fd52e02fa214d4f50f1894d937d9f31fe2: Status 404 returned error can't find the container with id 400b14e89c49e8d0022bb77937d1f4fd52e02fa214d4f50f1894d937d9f31fe2 Dec 12 14:10:18 crc kubenswrapper[5113]: W1212 14:10:18.078532 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-c2a4730307e063fed4c82f22546044ceaa88d565557ef0dd2da2b7f22c9ec4c5 WatchSource:0}: Error finding container c2a4730307e063fed4c82f22546044ceaa88d565557ef0dd2da2b7f22c9ec4c5: Status 404 returned error can't find the container with id c2a4730307e063fed4c82f22546044ceaa88d565557ef0dd2da2b7f22c9ec4c5 Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.083342 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:10:18 crc kubenswrapper[5113]: W1212 14:10:18.083809 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-11871bea8e0b150c8e57e14380c3eb29bf010ad08e9af116716cd8ede9bc965a WatchSource:0}: Error finding container 11871bea8e0b150c8e57e14380c3eb29bf010ad08e9af116716cd8ede9bc965a: Status 404 returned error can't find the container with id 11871bea8e0b150c8e57e14380c3eb29bf010ad08e9af116716cd8ede9bc965a Dec 12 14:10:18 crc kubenswrapper[5113]: W1212 14:10:18.090035 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-a7c19e8fa4c43873463a3ffd87112305c83d1d61aa5d53e7f696802d6c044a06 WatchSource:0}: Error finding container a7c19e8fa4c43873463a3ffd87112305c83d1d61aa5d53e7f696802d6c044a06: Status 404 returned error can't find the container with id a7c19e8fa4c43873463a3ffd87112305c83d1d61aa5d53e7f696802d6c044a06 Dec 12 14:10:18 crc kubenswrapper[5113]: W1212 14:10:18.092689 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-89eebc76c7bbda6871c909b026e654e2c16396a455f927f64df5b81b2716cf16 WatchSource:0}: Error finding container 89eebc76c7bbda6871c909b026e654e2c16396a455f927f64df5b81b2716cf16: Status 404 returned error can't find the container with id 
89eebc76c7bbda6871c909b026e654e2c16396a455f927f64df5b81b2716cf16 Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.250938 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.253031 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.253119 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.253163 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.253199 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.253908 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.389805 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.492184 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"89eebc76c7bbda6871c909b026e654e2c16396a455f927f64df5b81b2716cf16"} Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.495303 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a7c19e8fa4c43873463a3ffd87112305c83d1d61aa5d53e7f696802d6c044a06"} Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.496415 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"11871bea8e0b150c8e57e14380c3eb29bf010ad08e9af116716cd8ede9bc965a"} Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.497634 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"c2a4730307e063fed4c82f22546044ceaa88d565557ef0dd2da2b7f22c9ec4c5"} Dec 12 14:10:18 crc kubenswrapper[5113]: I1212 14:10:18.498678 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"400b14e89c49e8d0022bb77937d1f4fd52e02fa214d4f50f1894d937d9f31fe2"} Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.789114 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.806663 5113 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s"
Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.850580 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:10:18 crc kubenswrapper[5113]: E1212 14:10:18.910237 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.054366 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.055829 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.055892 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.055906 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.055948 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.056579 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.075564 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.389413 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.446457 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.447705 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.504218 5113 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="bef0de3fac9d6c6f6bbedd63d19310499aa7c56d41cc410fdc1050382979f88e" exitCode=0
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.504272 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"bef0de3fac9d6c6f6bbedd63d19310499aa7c56d41cc410fdc1050382979f88e"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.504592 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.505728 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.505778 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.505794 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.506031 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.506797 5113 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="27397e5d42e374524ba5bca68838bc5096d77e381799deae93d4d7a10f32d279" exitCode=0
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.506937 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.507087 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"27397e5d42e374524ba5bca68838bc5096d77e381799deae93d4d7a10f32d279"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.507436 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.507466 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.507479 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.507683 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.511104 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ccb7f9b929e828dd8220bfa92ffca05f4a2a78a97f17a86787dcf729ff4feafc" exitCode=0
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.511204 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ccb7f9b929e828dd8220bfa92ffca05f4a2a78a97f17a86787dcf729ff4feafc"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.511326 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.512444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.512477 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.512489 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.512686 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.514649 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.515526 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.515560 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.515574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.515588 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="2289409f704de09fd5046d240291927dcfd1d0bd2e405d7a5c774b5d3f60be7c" exitCode=0
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.515678 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"2289409f704de09fd5046d240291927dcfd1d0bd2e405d7a5c774b5d3f60be7c"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.516886 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.517658 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.517691 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.517706 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:19 crc kubenswrapper[5113]: E1212 14:10:19.518062 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.524458 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7c5dd29b60ac54434d6905e5e2060984558d4ebd1c0dbeb2fc26de7d4edb2350"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.524516 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"690e612679d6f018155dde739043394344f531477646d0b549357e8be58c5e65"}
Dec 12 14:10:19 crc kubenswrapper[5113]: I1212 14:10:19.524527 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.528967 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b580f18ad4f07a1213ec639cdb9df787c5ef723b26eded55ee758cb6f9f62cb9"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.529058 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"206f507a6fdf88d3e1d29676b4a01b01c2d876ce2806953af724790345c9e763"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.529078 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ae59a492a24b7e319fb6e2535bd395840e015bc16f625d44a75bc0d8b996b8e6"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530302 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="358a32c664caec74b7afc5ba497da5182c9ad7295f248c6622b51655b89b3d35" exitCode=0
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530382 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"358a32c664caec74b7afc5ba497da5182c9ad7295f248c6622b51655b89b3d35"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530457 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530921 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530954 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.530965 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:20 crc kubenswrapper[5113]: E1212 14:10:20.531251 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.534533 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"086595c87c1781f7167b3b7e320cd2e025c9b0b2eeb0349797a53b94d5ecf160"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.534652 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.535266 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.535297 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.535308 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:20 crc kubenswrapper[5113]: E1212 14:10:20.535584 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.536165 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"bbf84d3d456666ce03f4248e30f60ecaca28ffb59032e960ed930a9f53e50549"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.536270 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.541334 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.541378 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.541403 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:20 crc kubenswrapper[5113]: E1212 14:10:20.541597 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.543697 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"00c5bb0f8e7e4ac1050355875ddd3cd32a5385c070d2a32cec02b7f9bf40ccf4"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.543729 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"abe78f4493f4d871e29656306a9958d27363fcc43f0a820d66ecbeaf45220e27"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.543741 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"31de24bc1eb798fac3083251b82b21b3ddaa03b62cf389cc751edc8edbacad37"}
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.543871 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.544444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.544465 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.544473 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:20 crc kubenswrapper[5113]: E1212 14:10:20.544637 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.656684 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.663908 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.664370 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.664409 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:20 crc kubenswrapper[5113]: I1212 14:10:20.664476 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.550368 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"794652cedc86dbd83b8c1657c21d6c5b7936316e2fd7e1da60d63d9b4d567ce5"}
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.550452 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4dcb1f921c832112e5a3717359d76d330218be7e53f95c41d75b5738ce073c00"}
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.550707 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.551600 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.551689 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.551770 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:21 crc kubenswrapper[5113]: E1212 14:10:21.552295 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.553780 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="2fc64ca267a6f3f76f661088ad8bfeac5b5b49f861206788e60f537c78a07cca" exitCode=0
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.553854 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"2fc64ca267a6f3f76f661088ad8bfeac5b5b49f861206788e60f537c78a07cca"}
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.553963 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554177 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554190 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554050 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554345 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554911 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554952 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.554966 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:21 crc kubenswrapper[5113]: E1212 14:10:21.555275 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555721 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555743 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555754 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555876 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555936 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.555966 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:21 crc kubenswrapper[5113]: E1212 14:10:21.556064 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.556447 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.556502 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:21 crc kubenswrapper[5113]: I1212 14:10:21.556518 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:21 crc kubenswrapper[5113]: E1212 14:10:21.556685 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:21 crc kubenswrapper[5113]: E1212 14:10:21.557147 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.429602 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.558773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cc48af6a76044e7783015f4b054d5951ab72fd2e2ce7bd8fa050ec3b16fd7ef4"}
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.558839 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.558906 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.559486 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.559523 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.559535 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:22 crc kubenswrapper[5113]: E1212 14:10:22.559934 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.588745 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.878998 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.879323 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.880195 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.880240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:22 crc kubenswrapper[5113]: I1212 14:10:22.880255 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:22 crc kubenswrapper[5113]: E1212 14:10:22.880590 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.016824 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.302328 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.302586 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.303550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.303612 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.303628 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:23 crc kubenswrapper[5113]: E1212 14:10:23.304074 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8c822d500a3b37a2d55f9c587c4f369ef9bcb10fe8265a360b7c48548574da69"}
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567309 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b8bf923968a6c879c13907d07b73c554d316b86acb88d866e3fa0a772a4fee32"}
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567329 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0009599cb3f5dd74008214ffbbaaecd1838b243f8a8299a307f3d5114800365b"}
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567342 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"97a86d5b85dcf2115999c68636a544f2f2d3c85ee20b461ed27b2e15640fa556"}
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567503 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567574 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.567611 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568368 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568406 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568417 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:23 crc kubenswrapper[5113]: E1212 14:10:23.568719 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568922 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568956 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568969 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.568999 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.569030 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.569041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:23 crc kubenswrapper[5113]: E1212 14:10:23.569230 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: E1212 14:10:23.569442 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:23 crc kubenswrapper[5113]: I1212 14:10:23.607205 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.569971 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.569983 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571272 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571317 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571272 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571373 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571333 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.571387 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:24 crc kubenswrapper[5113]: E1212 14:10:24.571850 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:24 crc kubenswrapper[5113]: E1212 14:10:24.575876 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:24 crc kubenswrapper[5113]: I1212 14:10:24.942156 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.172677 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.572751 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.572857 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574059 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574092 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574103 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574105 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574148 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:25 crc kubenswrapper[5113]: I1212 14:10:25.574160 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:25 crc kubenswrapper[5113]: E1212 14:10:25.574455 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:25 crc kubenswrapper[5113]: E1212 14:10:25.574785 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:26 crc kubenswrapper[5113]: I1212 14:10:26.341686 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:26 crc kubenswrapper[5113]: I1212 14:10:26.575355 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:26 crc kubenswrapper[5113]: I1212 14:10:26.576331 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:26 crc kubenswrapper[5113]: I1212 14:10:26.576392 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:26 crc kubenswrapper[5113]: I1212 14:10:26.576407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:26 crc kubenswrapper[5113]: E1212 14:10:26.577179 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:27 crc kubenswrapper[5113]: E1212 14:10:27.663693 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.437050 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.437365 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.438874 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.438905 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.438915 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:29 crc kubenswrapper[5113]: E1212 14:10:29.439202 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.442415 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.582914 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.583679 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.583718 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.583731 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:29 crc kubenswrapper[5113]: E1212 14:10:29.584065 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.587978 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:29 crc kubenswrapper[5113]: I1212 14:10:29.808903 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.391431 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 12 14:10:30 crc kubenswrapper[5113]: E1212 14:10:30.407995 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.501594 5113 trace.go:236] Trace[539770452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:20.499) (total time: 10001ms):
Dec 12 14:10:30 crc kubenswrapper[5113]: Trace[539770452]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:10:30.501)
Dec 12 14:10:30 crc kubenswrapper[5113]: Trace[539770452]: [10.00159874s] [10.00159874s] END
Dec 12 14:10:30 crc kubenswrapper[5113]: E1212 14:10:30.501645 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.585198 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.586013 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.586070 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:30 crc kubenswrapper[5113]: I1212 14:10:30.586085 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:30 crc kubenswrapper[5113]: E1212 14:10:30.586567 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:30 crc kubenswrapper[5113]: E1212 14:10:30.616396 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18807d1cfdc64e0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC m=+0.234672301,LastTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC m=+0.234672301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:30 crc kubenswrapper[5113]: E1212 14:10:30.671430 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.132100 5113 trace.go:236] Trace[1570641420]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:21.131) (total time: 10000ms):
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[1570641420]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (14:10:31.132)
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[1570641420]: [10.000881729s] [10.000881729s] END
Dec 12 14:10:31 crc kubenswrapper[5113]: E1212 14:10:31.132169 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.588004 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.588738 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.588777 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.588789 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:31 crc kubenswrapper[5113]: E1212 14:10:31.589092 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.656731 5113 trace.go:236] Trace[917972434]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:21.654) (total time: 10002ms):
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[917972434]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:10:31.656)
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[917972434]: [10.00204051s] [10.00204051s] END
Dec 12 14:10:31 crc kubenswrapper[5113]: E1212 14:10:31.656791 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:10:31 crc kubenswrapper[5113]: I1212 14:10:31.878091 5113 trace.go:236] Trace[645346060]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 14:10:21.875) (total time: 10002ms):
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[645346060]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:10:31.878)
Dec 12 14:10:31 crc kubenswrapper[5113]: Trace[645346060]: [10.002153535s] [10.002153535s] END
Dec 12 14:10:31 crc kubenswrapper[5113]: E1212 14:10:31.878161 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.027707 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.027836 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.034214 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.034298 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.438338 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]log ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]etcd ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-filter ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-informers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-controllers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/crd-informer-synced ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-system-namespaces-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 12 14:10:32 crc kubenswrapper[5113]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/bootstrap-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-registration-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-discovery-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]autoregister-completion ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapi-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 12 14:10:32 crc kubenswrapper[5113]: livez check failed
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.438434 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.809662 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 12 14:10:32 crc kubenswrapper[5113]: I1212 14:10:32.809739 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 12 14:10:33 crc kubenswrapper[5113]: E1212 14:10:33.610543 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
Dec 12 14:10:33 crc kubenswrapper[5113]: E1212 14:10:33.779832 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:10:33 crc kubenswrapper[5113]: I1212 14:10:33.872525 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:33 crc kubenswrapper[5113]: I1212 14:10:33.874330 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:33 crc kubenswrapper[5113]: I1212 14:10:33.874392 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:33 crc kubenswrapper[5113]: I1212 14:10:33.874404 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:33 crc kubenswrapper[5113]: I1212 14:10:33.874429 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:33 crc kubenswrapper[5113]: E1212 14:10:33.884744 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:10:34 crc kubenswrapper[5113]: E1212 14:10:34.667169 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.196359 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.196670 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.197669 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.197767 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.197785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:35 crc kubenswrapper[5113]: E1212 14:10:35.198382 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.210722 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.598621 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.599219 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.599266 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:35 crc kubenswrapper[5113]: I1212 14:10:35.599277 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:35 crc kubenswrapper[5113]: E1212 14:10:35.599691 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.042573 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.047735 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.070056 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36942->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.070432 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36942->192.168.126.11:17697: read: connection reset by peer"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.394326 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.435846 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.436465 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.436765 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.436821 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.439338 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.439398 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.439412 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:37 crc kubenswrapper[5113]: E1212 14:10:37.439976 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.442173 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:37 crc kubenswrapper[5113]: E1212 14:10:37.533943 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.607409 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.609634 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="794652cedc86dbd83b8c1657c21d6c5b7936316e2fd7e1da60d63d9b4d567ce5" exitCode=255
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.609689 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"794652cedc86dbd83b8c1657c21d6c5b7936316e2fd7e1da60d63d9b4d567ce5"}
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.609939 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.610913 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.610959 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.610971 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:37 crc kubenswrapper[5113]: E1212 14:10:37.611430 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:37 crc kubenswrapper[5113]: I1212 14:10:37.611748 5113 scope.go:117] "RemoveContainer" containerID="794652cedc86dbd83b8c1657c21d6c5b7936316e2fd7e1da60d63d9b4d567ce5"
Dec 12 14:10:37 crc kubenswrapper[5113]: E1212 14:10:37.614590 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:37 crc kubenswrapper[5113]: E1212 14:10:37.664500 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.401996 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.615045 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.617476 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"}
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.617600 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.618307 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.618355 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:38 crc kubenswrapper[5113]: I1212 14:10:38.618366 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:38 crc kubenswrapper[5113]: E1212 14:10:38.618724 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.395413 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.620296 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.620388 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.621175 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.621210 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.621228 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:39 crc kubenswrapper[5113]: E1212 14:10:39.621610 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.818827 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.819140 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.821283 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.821330 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.821346 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:39 crc kubenswrapper[5113]: E1212 14:10:39.821714 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:39 crc kubenswrapper[5113]: I1212 14:10:39.825793 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.016044 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.285696 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.287044 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.287150 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.287174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.287219 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.297174 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.393338 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.622504 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create
resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1cfdc64e0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC m=+0.234672301,LastTimestamp:2025-12-12 14:10:17.399422474 +0000 UTC m=+0.234672301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.624810 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.625465 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.627533 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15" exitCode=255 Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.627660 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"} Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.627742 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.627760 5113 scope.go:117] "RemoveContainer" containerID="794652cedc86dbd83b8c1657c21d6c5b7936316e2fd7e1da60d63d9b4d567ce5" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.627786 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.628050 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628584 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628614 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628634 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628860 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628906 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.628918 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.629015 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.629323 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:10:40 crc kubenswrapper[5113]: I1212 14:10:40.629580 5113 scope.go:117] "RemoveContainer" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.629840 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.634441 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.638902 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.645259 5113 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d06889bf0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.546374128 +0000 UTC m=+0.381623955,LastTimestamp:2025-12-12 14:10:17.546374128 +0000 UTC m=+0.381623955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.651138 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.584634205 +0000 UTC m=+0.419884032,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.655951 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.58467314 +0000 UTC m=+0.419922967,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.661533 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.584688568 +0000 UTC m=+0.419938395,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 
12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.666577 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.586438722 +0000 UTC m=+0.421688549,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.672007 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.586459019 +0000 UTC m=+0.421708846,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.677656 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.586467688 +0000 UTC m=+0.421717515,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.682216 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.587085122 +0000 UTC 
m=+0.422334959,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.686771 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.587111429 +0000 UTC m=+0.422361256,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.691628 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.587155843 +0000 UTC m=+0.422405670,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.698229 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.588325718 +0000 UTC m=+0.423575545,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.703922 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.588347595 +0000 UTC m=+0.423597422,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.709921 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.588359003 +0000 UTC m=+0.423608830,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.715252 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.588420546 +0000 UTC m=+0.423670373,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.720088 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.588446493 +0000 UTC m=+0.423696320,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.726257 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.588457811 +0000 UTC m=+0.423707638,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.731911 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.590256048 +0000 UTC m=+0.425505915,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.737594 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.590285514 +0000 UTC m=+0.425535371,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.744323 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d0276fdb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d0276fdb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478110649 +0000 UTC m=+0.313360476,LastTimestamp:2025-12-12 14:10:17.590304492 +0000 UTC m=+0.425554359,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.749893 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02744e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.18807d1d02744e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.477934652 +0000 UTC m=+0.313184479,LastTimestamp:2025-12-12 14:10:17.590334748 +0000 UTC m=+0.425584575,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.755817 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18807d1d02755d4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18807d1d02755d4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:17.478004043 +0000 UTC m=+0.313253870,LastTimestamp:2025-12-12 14:10:17.590351356 +0000 UTC m=+0.425601183,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.762297 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d268f855d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.083698013 +0000 UTC m=+0.918947840,LastTimestamp:2025-12-12 14:10:18.083698013 +0000 UTC m=+0.918947840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.767667 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d26a7070f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.085238543 +0000 UTC m=+0.920488370,LastTimestamp:2025-12-12 14:10:18.085238543 +0000 UTC m=+0.920488370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.772775 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d2715f8f7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.092509431 +0000 UTC m=+0.927759258,LastTimestamp:2025-12-12 14:10:18.092509431 +0000 UTC m=+0.927759258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.777160 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d272bf4f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.093950192 +0000 UTC m=+0.929200009,LastTimestamp:2025-12-12 14:10:18.093950192 +0000 UTC m=+0.929200009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.782334 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d27b6d20c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.103050764 +0000 UTC m=+0.938300591,LastTimestamp:2025-12-12 14:10:18.103050764 +0000 UTC m=+0.938300591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.787087 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d4bac34d6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.706334934 +0000 UTC m=+1.541584761,LastTimestamp:2025-12-12 14:10:18.706334934 +0000 UTC m=+1.541584761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.795831 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d4bae3622 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.706466338 +0000 UTC m=+1.541716155,LastTimestamp:2025-12-12 14:10:18.706466338 +0000 UTC m=+1.541716155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.801786 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d4c25c4f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.714301687 +0000 UTC m=+1.549551514,LastTimestamp:2025-12-12 14:10:18.714301687 +0000 UTC m=+1.549551514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc 
kubenswrapper[5113]: E1212 14:10:40.807619 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d4c342108 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.71524276 +0000 UTC m=+1.550492587,LastTimestamp:2025-12-12 14:10:18.71524276 +0000 UTC m=+1.550492587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.812706 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d4c34a692 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.715276946 +0000 UTC m=+1.550526773,LastTimestamp:2025-12-12 14:10:18.715276946 +0000 UTC m=+1.550526773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.817638 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d4c6d5ccf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.718993615 +0000 UTC m=+1.554243442,LastTimestamp:2025-12-12 14:10:18.718993615 +0000 UTC m=+1.554243442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.822863 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d4c817b9e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.720312222 +0000 UTC m=+1.555562049,LastTimestamp:2025-12-12 14:10:18.720312222 +0000 UTC m=+1.555562049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.828302 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d4ca12360 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.722386784 +0000 UTC m=+1.557636611,LastTimestamp:2025-12-12 14:10:18.722386784 +0000 UTC m=+1.557636611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.837990 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d4d803b20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.737007392 +0000 UTC m=+1.572257239,LastTimestamp:2025-12-12 14:10:18.737007392 +0000 UTC m=+1.572257239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.843353 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d4d9a6209 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.738721289 +0000 UTC 
m=+1.573971116,LastTimestamp:2025-12-12 14:10:18.738721289 +0000 UTC m=+1.573971116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.848486 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d5cae06f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:18.991666936 +0000 UTC m=+1.826916773,LastTimestamp:2025-12-12 14:10:18.991666936 +0000 UTC m=+1.826916773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.854250 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d5d5a43b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.002954677 +0000 UTC m=+1.838204504,LastTimestamp:2025-12-12 14:10:19.002954677 +0000 UTC m=+1.838204504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.860026 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d5d71f94d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.004508493 +0000 UTC m=+1.839758320,LastTimestamp:2025-12-12 14:10:19.004508493 +0000 UTC m=+1.839758320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.865777 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d6b58d99b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.237743003 +0000 UTC m=+2.072992830,LastTimestamp:2025-12-12 14:10:19.237743003 +0000 UTC m=+2.072992830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.871189 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d73dcaf0c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.380600588 +0000 UTC m=+2.215850405,LastTimestamp:2025-12-12 14:10:19.380600588 +0000 UTC m=+2.215850405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.875910 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d7469ad19 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.389840665 +0000 UTC m=+2.225090492,LastTimestamp:2025-12-12 14:10:19.389840665 +0000 UTC m=+2.225090492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.880517 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d747bdb73 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.391032179 +0000 UTC m=+2.226282006,LastTimestamp:2025-12-12 14:10:19.391032179 +0000 UTC m=+2.226282006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.885233 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d7b660dc1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.507043777 +0000 UTC m=+2.342293604,LastTimestamp:2025-12-12 14:10:19.507043777 +0000 UTC m=+2.342293604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.887932 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d7b9c431d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.510596381 +0000 UTC m=+2.345846208,LastTimestamp:2025-12-12 14:10:19.510596381 +0000 UTC m=+2.345846208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.891209 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d7bd78bc0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.5144816 +0000 UTC m=+2.349731447,LastTimestamp:2025-12-12 14:10:19.5144816 +0000 UTC m=+2.349731447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.893370 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d7c80fc77 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.525586039 +0000 UTC m=+2.360835866,LastTimestamp:2025-12-12 14:10:19.525586039 +0000 UTC m=+2.360835866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.899448 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d8204447b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.618075771 +0000 UTC m=+2.453325598,LastTimestamp:2025-12-12 14:10:19.618075771 +0000 UTC m=+2.453325598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.904826 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d1d8397676b 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.644495723 +0000 UTC m=+2.479745550,LastTimestamp:2025-12-12 14:10:19.644495723 +0000 UTC m=+2.479745550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.910225 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d8ba8facd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.779865293 +0000 UTC m=+2.615115120,LastTimestamp:2025-12-12 14:10:19.779865293 +0000 UTC m=+2.615115120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.917052 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d8bb6f995 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.780782485 +0000 UTC m=+2.616032312,LastTimestamp:2025-12-12 14:10:19.780782485 +0000 UTC m=+2.616032312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.922649 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d8bc1ed9a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.781500314 +0000 UTC m=+2.616750141,LastTimestamp:2025-12-12 14:10:19.781500314 +0000 UTC m=+2.616750141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.928186 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d8c7b601a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.793653786 +0000 UTC m=+2.628903613,LastTimestamp:2025-12-12 14:10:19.793653786 +0000 UTC m=+2.628903613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.933157 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d8c8d16bb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.794814651 +0000 UTC m=+2.630064478,LastTimestamp:2025-12-12 14:10:19.794814651 +0000 UTC m=+2.630064478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.938251 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18807d1d8cd1bfc9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.799314377 +0000 UTC m=+2.634564194,LastTimestamp:2025-12-12 14:10:19.799314377 +0000 UTC m=+2.634564194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.943878 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1d8d422596 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.80668047 +0000 UTC m=+2.641930297,LastTimestamp:2025-12-12 14:10:19.80668047 +0000 UTC m=+2.641930297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.949395 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d8d66d666 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.80908503 +0000 UTC m=+2.644334857,LastTimestamp:2025-12-12 14:10:19.80908503 +0000 UTC m=+2.644334857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.954699 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d8edd228c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.833614988 +0000 UTC m=+2.668864815,LastTimestamp:2025-12-12 14:10:19.833614988 +0000 UTC m=+2.668864815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.958950 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d8ef12c39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:19.834928185 +0000 UTC m=+2.670178002,LastTimestamp:2025-12-12 14:10:19.834928185 +0000 UTC m=+2.670178002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.963381 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d99c42784 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.016527236 +0000 UTC m=+2.851777063,LastTimestamp:2025-12-12 14:10:20.016527236 +0000 UTC m=+2.851777063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.968067 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d9a77620b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.028273163 +0000 UTC m=+2.863522990,LastTimestamp:2025-12-12 14:10:20.028273163 +0000 UTC m=+2.863522990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.972811 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1d9a87d2c5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.029350597 +0000 UTC m=+2.864600424,LastTimestamp:2025-12-12 14:10:20.029350597 +0000 UTC m=+2.864600424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.978387 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d9e27eb98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.09017436 +0000 UTC m=+2.925424187,LastTimestamp:2025-12-12 14:10:20.09017436 +0000 UTC m=+2.925424187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.983256 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d9f03f7f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.104595442 +0000 UTC m=+2.939845269,LastTimestamp:2025-12-12 14:10:20.104595442 +0000 UTC m=+2.939845269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.988161 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1d9f2f5159 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.107436377 +0000 UTC m=+2.942686204,LastTimestamp:2025-12-12 14:10:20.107436377 +0000 UTC m=+2.942686204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.992675 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1da97bc557 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.280218967 +0000 UTC m=+3.115468794,LastTimestamp:2025-12-12 14:10:20.280218967 +0000 UTC m=+3.115468794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:40 crc kubenswrapper[5113]: E1212 14:10:40.997513 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18807d1daa6817e0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.295706592 +0000 UTC m=+3.130956419,LastTimestamp:2025-12-12 14:10:20.295706592 +0000 UTC m=+3.130956419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.002801 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dab785307 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.313547527 +0000 UTC m=+3.148797354,LastTimestamp:2025-12-12 14:10:20.313547527 +0000 
UTC m=+3.148797354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.007988 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dac213fc0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.324618176 +0000 UTC m=+3.159868003,LastTimestamp:2025-12-12 14:10:20.324618176 +0000 UTC m=+3.159868003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.012346 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dac36fe1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.326043163 +0000 UTC m=+3.161292990,LastTimestamp:2025-12-12 14:10:20.326043163 +0000 UTC m=+3.161292990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.016993 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1db829a253 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.526494291 +0000 UTC m=+3.361744118,LastTimestamp:2025-12-12 14:10:20.526494291 +0000 UTC m=+3.361744118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.021726 5113 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1db880e41a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.532212762 +0000 UTC m=+3.367462589,LastTimestamp:2025-12-12 14:10:20.532212762 +0000 UTC m=+3.367462589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.026811 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1db949a72a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.545369898 +0000 UTC m=+3.380619725,LastTimestamp:2025-12-12 14:10:20.545369898 +0000 UTC m=+3.380619725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.032216 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1db971c113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.547997971 +0000 UTC m=+3.383247798,LastTimestamp:2025-12-12 14:10:20.547997971 +0000 UTC m=+3.383247798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.037945 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1dc4be09d8 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.737546712 +0000 UTC m=+3.572796549,LastTimestamp:2025-12-12 14:10:20.737546712 +0000 UTC m=+3.572796549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.042794 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc4bf3d63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.737625443 +0000 UTC m=+3.572875270,LastTimestamp:2025-12-12 14:10:20.737625443 +0000 UTC m=+3.572875270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.048284 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc570c287 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.749259399 +0000 UTC m=+3.584509226,LastTimestamp:2025-12-12 14:10:20.749259399 +0000 UTC m=+3.584509226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.053590 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1dc57ce099 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.750053529 +0000 UTC m=+3.585303356,LastTimestamp:2025-12-12 
14:10:20.750053529 +0000 UTC m=+3.585303356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.060569 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1df595f6f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:21.557004017 +0000 UTC m=+4.392253834,LastTimestamp:2025-12-12 14:10:21.557004017 +0000 UTC m=+4.392253834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.065654 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e2825de86 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.40529575 +0000 UTC m=+5.240545577,LastTimestamp:2025-12-12 14:10:22.40529575 +0000 UTC m=+5.240545577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.070496 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e29f2964a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.435489354 +0000 UTC m=+5.270739181,LastTimestamp:2025-12-12 14:10:22.435489354 +0000 UTC m=+5.270739181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.075273 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e2a0d23ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.43722955 +0000 UTC m=+5.272479387,LastTimestamp:2025-12-12 14:10:22.43722955 +0000 UTC m=+5.272479387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.081265 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e464cb866 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.911158374 +0000 UTC m=+5.746408201,LastTimestamp:2025-12-12 14:10:22.911158374 +0000 UTC m=+5.746408201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.086701 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e474791a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.927597993 +0000 UTC m=+5.762847860,LastTimestamp:2025-12-12 14:10:22.927597993 +0000 UTC m=+5.762847860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.092340 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e475b3a10 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:22.928886288 +0000 UTC m=+5.764136115,LastTimestamp:2025-12-12 14:10:22.928886288 +0000 UTC 
m=+5.764136115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.096508 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e5125c8b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.093156022 +0000 UTC m=+5.928405849,LastTimestamp:2025-12-12 14:10:23.093156022 +0000 UTC m=+5.928405849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.101632 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e51cf319f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.104258463 +0000 UTC m=+5.939508290,LastTimestamp:2025-12-12 14:10:23.104258463 +0000 UTC m=+5.939508290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.106104 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e51e8926e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.105921646 +0000 UTC m=+5.941171473,LastTimestamp:2025-12-12 14:10:23.105921646 +0000 UTC m=+5.941171473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.111308 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e5e17ecce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.310351566 +0000 UTC m=+6.145601393,LastTimestamp:2025-12-12 14:10:23.310351566 +0000 UTC m=+6.145601393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.116298 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e5ed13819 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.322495001 +0000 UTC m=+6.157744828,LastTimestamp:2025-12-12 14:10:23.322495001 +0000 UTC m=+6.157744828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.120194 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e5f0d5076 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.326433398 +0000 UTC m=+6.161683225,LastTimestamp:2025-12-12 14:10:23.326433398 +0000 UTC m=+6.161683225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.124891 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e6a955a64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.519898212 +0000 UTC m=+6.355148029,LastTimestamp:2025-12-12 14:10:23.519898212 +0000 UTC m=+6.355148029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.129726 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18807d1e6b5860b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:23.532679347 +0000 UTC m=+6.367929174,LastTimestamp:2025-12-12 14:10:23.532679347 +0000 UTC m=+6.367929174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.135834 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2065b16be0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 12 14:10:41 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 14:10:41 crc kubenswrapper[5113]: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.027786208 +0000 UTC m=+14.863036035,LastTimestamp:2025-12-12 14:10:32.027786208 +0000 UTC m=+14.863036035,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.140456 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2065b33de8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.027905512 +0000 UTC m=+14.863155349,LastTimestamp:2025-12-12 14:10:32.027905512 +0000 UTC m=+14.863155349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.144784 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d2065b16be0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2065b16be0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 12 14:10:41 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 14:10:41 crc kubenswrapper[5113]: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.027786208 +0000 UTC m=+14.863036035,LastTimestamp:2025-12-12 14:10:32.034270619 +0000 UTC m=+14.869520446,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.149503 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d2065b33de8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d2065b33de8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.027905512 +0000 UTC m=+14.863155349,LastTimestamp:2025-12-12 14:10:32.034325661 +0000 UTC m=+14.869575488,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.153415 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.18807d207e2af413 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Dec 12 14:10:41 crc kubenswrapper[5113]: body: [+]ping ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]log ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]etcd ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-filter ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-informers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-controllers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/crd-informer-synced ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-system-namespaces-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 12 14:10:41 crc kubenswrapper[5113]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/bootstrap-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-registration-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-discovery-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]autoregister-completion ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapi-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 12 14:10:41 crc kubenswrapper[5113]: livez check failed
Dec 12 14:10:41 crc kubenswrapper[5113]: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.438404115 +0000 UTC m=+15.273653942,LastTimestamp:2025-12-12 14:10:32.438404115 +0000 UTC m=+15.273653942,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.157312 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d207e2bd5bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.438461887 +0000 UTC m=+15.273711714,LastTimestamp:2025-12-12 14:10:32.438461887 +0000 UTC m=+15.273711714,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.161340 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-controller-manager-crc.18807d20944cbd48 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Dec 12 14:10:41 crc kubenswrapper[5113]: body: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.809717064 +0000 UTC m=+15.644966891,LastTimestamp:2025-12-12 14:10:32.809717064 +0000 UTC m=+15.644966891,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.166943 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18807d20944d647a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:32.809759866 +0000 UTC m=+15.645009693,LastTimestamp:2025-12-12 14:10:32.809759866 +0000 UTC m=+15.645009693,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.172750 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.18807d2192417276 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:36942->192.168.126.11:17697: read: connection reset by peer
Dec 12 14:10:41 crc kubenswrapper[5113]: body: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:37.070389878 +0000 UTC m=+19.905639725,LastTimestamp:2025-12-12 14:10:37.070389878 +0000 UTC m=+19.905639725,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.177262 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d219256e007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36942->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:37.071794183 +0000 UTC m=+19.907044030,LastTimestamp:2025-12-12 14:10:37.071794183 +0000 UTC m=+19.907044030,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.184529 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 12 14:10:41 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.18807d21a8188738 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 12 14:10:41 crc kubenswrapper[5113]: body: 
Dec 12 14:10:41 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:37.436806968 +0000 UTC m=+20.272056795,LastTimestamp:2025-12-12 14:10:37.436806968 +0000 UTC m=+20.272056795,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 12 14:10:41 crc kubenswrapper[5113]: >
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.189102 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d21a8191ab0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:37.43684472 +0000 UTC m=+20.272094557,LastTimestamp:2025-12-12 14:10:37.43684472 +0000 UTC m=+20.272094557,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.194504 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1db971c113\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1db971c113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.547997971 +0000 UTC m=+3.383247798,LastTimestamp:2025-12-12 14:10:37.613412174 +0000 UTC m=+20.448662001,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.199885 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1dc4bf3d63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc4bf3d63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.737625443 +0000 UTC m=+3.572875270,LastTimestamp:2025-12-12 14:10:37.931086682 +0000 UTC m=+20.766336519,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.204135 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1dc570c287\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc570c287 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.749259399 +0000 UTC m=+3.584509226,LastTimestamp:2025-12-12 14:10:37.946000796 +0000 UTC m=+20.781250623,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.209711 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.392566 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.633300 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.636064 5113 kubelet_node_status.go:413] "Setting node annotation to enable 
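[Editor's note] The ProbeError/Unhealthy events above record the kubelet's HTTP health probes failing while the apiserver's bootstrap hooks ([-]poststarthook/rbac/bootstrap-roles) are still incomplete. A minimal Go sketch of fetching the same verbose livez document by hand; the localhost:6443 endpoint is an assumption, and TLS verification is skipped only because this is a throwaway local illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Dev-cluster illustration only: never skip TLS verification in real tooling.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:6443/livez?verbose") // assumed endpoint/port
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 500 status here corresponds to the "livez check failed" body logged above.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}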
Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.637538 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.637729 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.638035 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.638757 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:41 crc kubenswrapper[5113]: I1212 14:10:41.639214 5113 scope.go:117] "RemoveContainer" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.639637 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:10:41 crc kubenswrapper[5113]: E1212 14:10:41.645915 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d226669a66d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:10:41.63958568 +0000 UTC m=+24.474835507,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:42 crc kubenswrapper[5113]: I1212 14:10:42.393629 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:43 crc kubenswrapper[5113]: I1212 14:10:43.393457 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:44 crc kubenswrapper[5113]: I1212 14:10:44.393816 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:45 crc kubenswrapper[5113]: I1212 14:10:45.393980 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:45 crc kubenswrapper[5113]: E1212 14:10:45.487291 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:10:46 crc kubenswrapper[5113]: I1212 14:10:46.389393 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:46 crc kubenswrapper[5113]: E1212 14:10:46.656318 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:10:46 crc kubenswrapper[5113]: E1212 14:10:46.924245 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:10:47 crc kubenswrapper[5113]: E1212 14:10:47.025932 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.297917 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.299878 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.299928 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.299941 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.299983 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5113]: E1212 14:10:47.309415 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:10:47 crc kubenswrapper[5113]: I1212 14:10:47.395716 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:47 crc kubenswrapper[5113]: E1212 14:10:47.665685 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:10:48 crc kubenswrapper[5113]: I1212 14:10:48.394283 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.393017 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.573883 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.574212 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.575141 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.575202 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.575215 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:49 crc kubenswrapper[5113]: E1212 14:10:49.575703 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:10:49 crc kubenswrapper[5113]: I1212 14:10:49.576059 5113 scope.go:117] "RemoveContainer" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"
Dec 12 14:10:49 crc kubenswrapper[5113]: E1212 14:10:49.576360 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:10:49 crc kubenswrapper[5113]: E1212 14:10:49.581199 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d226669a66d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:10:49.576316785 +0000 UTC m=+32.411566612,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:10:49 crc kubenswrapper[5113]: E1212 14:10:49.670774 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:10:50 crc kubenswrapper[5113]: I1212 14:10:50.394605 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:51 crc kubenswrapper[5113]: I1212 14:10:51.393870 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:52 crc kubenswrapper[5113]: I1212 14:10:52.396440 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:53 crc kubenswrapper[5113]: I1212 14:10:53.399064 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:54 crc kubenswrapper[5113]: E1212 14:10:54.030880 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.309986 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.311193 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.311266 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.311284 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.311313 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:10:54 crc kubenswrapper[5113]: E1212 14:10:54.326691 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:10:54 crc kubenswrapper[5113]: I1212 14:10:54.395178 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:55 crc kubenswrapper[5113]: I1212 14:10:55.394954 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:56 crc kubenswrapper[5113]: I1212 14:10:56.394279 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:57 crc kubenswrapper[5113]: I1212 14:10:57.394426 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:57 crc kubenswrapper[5113]: E1212 14:10:57.666375 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:10:58 crc kubenswrapper[5113]: I1212 14:10:58.393665 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:10:59 crc kubenswrapper[5113]: I1212 14:10:59.393467 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.395265 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.482513 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.484006 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.484198 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.484291 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:00 crc kubenswrapper[5113]: E1212 14:11:00.484994 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:00 crc kubenswrapper[5113]: I1212 14:11:00.485467 5113 scope.go:117] "RemoveContainer" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"
Dec 12 14:11:00 crc kubenswrapper[5113]: E1212 14:11:00.497651 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1db971c113\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1db971c113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.547997971 +0000 UTC m=+3.383247798,LastTimestamp:2025-12-12 14:11:00.487187836 +0000 UTC m=+43.322437663,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:01 crc kubenswrapper[5113]: E1212 14:11:01.038283 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:01 crc kubenswrapper[5113]: E1212 14:11:01.054813 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1dc4bf3d63\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc4bf3d63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.737625443 +0000 UTC m=+3.572875270,LastTimestamp:2025-12-12 14:11:01.04892618 +0000 UTC m=+43.884176007,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:01 crc kubenswrapper[5113]: E1212 14:11:01.081699 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d1dc570c287\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d1dc570c287 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:20.749259399 +0000 UTC m=+3.584509226,LastTimestamp:2025-12-12 14:11:01.075978378 +0000 UTC m=+43.911228205,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.327416 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.328550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.328600 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.328614 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.328640 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:01 crc kubenswrapper[5113]: E1212 14:11:01.338706 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.395067 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.696574 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.700366 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"}
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.700706 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.701568 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.701710 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:01 crc kubenswrapper[5113]: I1212 14:11:01.701797 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:01 crc kubenswrapper[5113]: E1212 14:11:01.702440 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.394683 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.705270 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.706304 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.708320 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c" exitCode=255
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.708375 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"}
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.708621 5113 scope.go:117] "RemoveContainer" containerID="d6abc82ec092b67f6223f549601d587e5b40f4af22810e7086f68ed9492bbd15"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.708938 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.711164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.711236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.711260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:02 crc kubenswrapper[5113]: E1212 14:11:02.711676 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:02 crc kubenswrapper[5113]: I1212 14:11:02.711931 5113 scope.go:117] "RemoveContainer" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"
Dec 12 14:11:02 crc kubenswrapper[5113]: E1212 14:11:02.712198 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:02 crc kubenswrapper[5113]: E1212 14:11:02.717021 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d226669a66d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:11:02.712168891 +0000 UTC m=+45.547418718,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:03 crc kubenswrapper[5113]: I1212 14:11:03.393226 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:03 crc kubenswrapper[5113]: I1212 14:11:03.714285 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:04 crc kubenswrapper[5113]: I1212 14:11:04.393712 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:05 crc kubenswrapper[5113]: I1212 14:11:05.392700 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:06 crc kubenswrapper[5113]: I1212 14:11:06.394845 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:06 crc kubenswrapper[5113]: E1212 14:11:06.891579 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 14:11:07 crc kubenswrapper[5113]: E1212 14:11:07.352412 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 14:11:07 crc kubenswrapper[5113]: I1212 14:11:07.393391 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:07 crc kubenswrapper[5113]: E1212 14:11:07.667239 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:11:08 crc kubenswrapper[5113]: E1212 14:11:08.043997 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.339543 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.340735 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.340782 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.340795 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.340822 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:08 crc kubenswrapper[5113]: E1212 14:11:08.349900 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:08 crc kubenswrapper[5113]: I1212 14:11:08.390312 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.393715 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.573886 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.574188 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.575638 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.575793 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.575896 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:09 crc kubenswrapper[5113]: E1212 14:11:09.576448 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:09 crc kubenswrapper[5113]: I1212 14:11:09.576962 5113 scope.go:117] "RemoveContainer" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"
Dec 12 14:11:09 crc kubenswrapper[5113]: E1212 14:11:09.577309 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:09 crc kubenswrapper[5113]: E1212 14:11:09.582439 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d226669a66d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:11:09.577274619 +0000 UTC m=+52.412524446,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:10 crc kubenswrapper[5113]: E1212 14:11:10.216871 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 14:11:10 crc kubenswrapper[5113]: I1212 14:11:10.394640 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.394692 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.701014 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.701517 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.703054 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.703152 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.703173 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:11 crc kubenswrapper[5113]: E1212 14:11:11.703740 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:11 crc kubenswrapper[5113]: I1212 14:11:11.704072 5113 scope.go:117] "RemoveContainer" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"
Dec 12 14:11:11 crc kubenswrapper[5113]: E1212 14:11:11.704412 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:11:11 crc kubenswrapper[5113]: E1212 14:11:11.710941 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18807d226669a66d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18807d226669a66d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:10:40.629794413 +0000 UTC m=+23.465044240,LastTimestamp:2025-12-12 14:11:11.704365523 +0000 UTC m=+54.539615350,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 14:11:12 crc kubenswrapper[5113]: I1212 14:11:12.394303 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.308031 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.308260 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.309478 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.309518 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.309529 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:13 crc kubenswrapper[5113]: E1212 14:11:13.309860 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:13 crc kubenswrapper[5113]: I1212 14:11:13.394746 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:14 crc kubenswrapper[5113]: I1212 14:11:14.394999 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:15 crc kubenswrapper[5113]: E1212 14:11:15.049834 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:15 crc kubenswrapper[5113]: E1212 14:11:15.150096 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.350270 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.351299 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.351447 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.351561 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.351663 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:15 crc kubenswrapper[5113]: E1212 14:11:15.361209 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:15 crc kubenswrapper[5113]: I1212 14:11:15.395470 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:16 crc kubenswrapper[5113]: I1212 14:11:16.394511 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:17 crc kubenswrapper[5113]: I1212 14:11:17.395331 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:17 crc kubenswrapper[5113]: E1212 14:11:17.667707 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 14:11:18 crc kubenswrapper[5113]: I1212 14:11:18.396485 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:19 crc kubenswrapper[5113]: I1212 14:11:19.394588 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:20 crc kubenswrapper[5113]: I1212 14:11:20.393543 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:21 crc kubenswrapper[5113]: I1212 14:11:21.393745 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:22 crc kubenswrapper[5113]: E1212 14:11:22.056313 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.361391 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.362835 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.362884 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.362894 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.362918 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 14:11:22 crc kubenswrapper[5113]: E1212 14:11:22.373449 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 14:11:22 crc kubenswrapper[5113]: I1212 14:11:22.393836 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 14:11:23 crc kubenswrapper[5113]: I1212 14:11:23.126503 5113 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-rzg4m"
Dec 12 14:11:23 crc kubenswrapper[5113]: I1212 14:11:23.132637 5113 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-rzg4m"
Dec 12 14:11:23 crc kubenswrapper[5113]: I1212 14:11:23.197058 5113 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 12 14:11:23 crc kubenswrapper[5113]: I1212 14:11:23.309001 5113 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.133937 5113 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 14:06:23 +0000 UTC" deadline="2026-01-03 10:18:33.210533867 +0000 UTC"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.134021 5113 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="524h7m9.076517515s"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.482493 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.483888 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.483935 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.483948 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:24 crc kubenswrapper[5113]: E1212 14:11:24.484477 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:24 crc kubenswrapper[5113]: I1212 14:11:24.484783 5113 scope.go:117] "RemoveContainer" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.775185 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.778829 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5"}
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.779102 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.779739 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.779773 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:25 crc kubenswrapper[5113]: I1212 14:11:25.779785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:25 crc kubenswrapper[5113]: E1212 14:11:25.780207 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.783688 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.784448 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.786711 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" exitCode=255
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.786820 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5"}
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.786867 5113 scope.go:117] "RemoveContainer" containerID="8a63e441148bde4a45b4c5f4bc61e221f433d9429502beeec0688db5c015e88c"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.787481 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.788050 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.788089 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.788099 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:26 crc kubenswrapper[5113]: E1212 14:11:26.788489 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:26 crc kubenswrapper[5113]: I1212 14:11:26.788738 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:11:26 crc kubenswrapper[5113]: E1212 14:11:26.788966 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:27 crc kubenswrapper[5113]: E1212 14:11:27.668788 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:27 crc kubenswrapper[5113]: I1212 14:11:27.793638 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.373766 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.375223 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.375291 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.375306 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.375478 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.385637 5113 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.385909 5113 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.385944 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.391389 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.391441 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.391453 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.391471 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.391484 5113 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:29Z","lastTransitionTime":"2025-12-12T14:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.403724 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.410555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.410601 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.410611 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.410625 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.410653 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:29Z","lastTransitionTime":"2025-12-12T14:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.420634 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.428189 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.428235 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.428249 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.428268 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.428282 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:29Z","lastTransitionTime":"2025-12-12T14:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.439538 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.447190 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.447234 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.447246 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.447261 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.447271 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:29Z","lastTransitionTime":"2025-12-12T14:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.456805 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.457267 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.457383 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.558087 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.574304 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.574792 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.575934 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.575983 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.575995 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.576505 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:29 crc kubenswrapper[5113]: I1212 14:11:29.576846 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.577104 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.659141 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.760043 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.861252 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:29 crc kubenswrapper[5113]: E1212 14:11:29.961675 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.061998 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.162352 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.263036 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.363508 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.463614 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.563904 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc 
kubenswrapper[5113]: E1212 14:11:30.664451 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.764834 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.866038 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:30 crc kubenswrapper[5113]: E1212 14:11:30.966521 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.067177 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.167738 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.268428 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.368837 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.469196 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.570204 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.671226 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.772424 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.872637 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:31 crc kubenswrapper[5113]: E1212 14:11:31.972815 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.073409 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.173524 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.274573 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.375215 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.475939 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.576369 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.676975 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.777993 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: I1212 14:11:32.790489 5113 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.878693 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:32 crc kubenswrapper[5113]: E1212 14:11:32.979440 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.080611 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.181077 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.282158 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.382814 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.482944 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.584084 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.684249 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.785063 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.885936 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:33 crc kubenswrapper[5113]: E1212 14:11:33.986201 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.087087 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.187521 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.288458 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.388651 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: I1212 14:11:34.482651 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:34 crc kubenswrapper[5113]: I1212 14:11:34.483603 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:34 crc kubenswrapper[5113]: I1212 14:11:34.483631 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 14:11:34 crc kubenswrapper[5113]: I1212 14:11:34.483640 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.484027 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.489248 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.589361 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.689838 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.790898 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.892008 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:34 crc kubenswrapper[5113]: E1212 14:11:34.992932 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.093590 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.194692 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.295363 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.395553 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.496499 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.597599 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.698203 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.780379 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.780661 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.781676 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.781722 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.781732 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.782265 
5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:35 crc kubenswrapper[5113]: I1212 14:11:35.782527 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.782796 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.798905 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:35 crc kubenswrapper[5113]: E1212 14:11:35.899984 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.001060 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.101823 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.201953 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.302437 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.403442 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.504068 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.604679 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.705264 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.805418 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: E1212 14:11:36.905888 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:36 crc kubenswrapper[5113]: I1212 14:11:36.992385 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.006421 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.106886 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.207197 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc 
kubenswrapper[5113]: E1212 14:11:37.307465 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.407667 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.507846 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.608760 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.669671 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.709425 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.810028 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:37 crc kubenswrapper[5113]: E1212 14:11:37.910957 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.011966 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.112816 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.212990 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.313201 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.413632 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.513926 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.614339 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.715217 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.816363 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:38 crc kubenswrapper[5113]: E1212 14:11:38.916655 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.016802 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.118223 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.218826 5113 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.320016 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.420557 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.521442 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.549658 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.554961 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.555020 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.555040 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.555060 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.555073 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:39Z","lastTransitionTime":"2025-12-12T14:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.564491 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.568162 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.568217 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.568230 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.568249 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.568262 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:39Z","lastTransitionTime":"2025-12-12T14:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.577361 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.580566 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.580634 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.580647 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.580665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.580677 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:39Z","lastTransitionTime":"2025-12-12T14:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.590593 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.597973 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.598052 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.598067 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.598086 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:39 crc kubenswrapper[5113]: I1212 14:11:39.598097 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:39Z","lastTransitionTime":"2025-12-12T14:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.608673 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.608850 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.622184 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.723165 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.823678 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:39 crc kubenswrapper[5113]: E1212 14:11:39.924661 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.025636 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.126723 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.227445 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.328337 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.429401 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.530098 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.630720 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.731162 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.832159 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:40 crc kubenswrapper[5113]: E1212 14:11:40.932681 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.032891 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.133660 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.234627 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.335791 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.436802 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.537570 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.637929 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.738087 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 
14:11:41.838486 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: E1212 14:11:41.938960 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:41 crc kubenswrapper[5113]: I1212 14:11:41.952885 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.040023 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.141150 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.242200 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.343010 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.443095 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.543543 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.643939 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.744275 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.844934 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:42 crc kubenswrapper[5113]: E1212 14:11:42.945573 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.046588 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.147805 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.248846 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.349954 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.450384 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.550770 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.651448 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.751968 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc 
kubenswrapper[5113]: E1212 14:11:43.853019 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:43 crc kubenswrapper[5113]: E1212 14:11:43.953757 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.054909 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.155680 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.256210 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.356986 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.457507 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.557912 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.658975 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.759203 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.860194 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:44 crc kubenswrapper[5113]: E1212 14:11:44.960588 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.060726 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.161144 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.261531 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.362022 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.462833 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.563345 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.664345 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.765178 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.865866 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
12 14:11:45 crc kubenswrapper[5113]: E1212 14:11:45.966637 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.067321 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.168319 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.268516 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.369028 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.470072 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.570769 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.671193 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.771933 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.872879 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:46 crc kubenswrapper[5113]: E1212 14:11:46.973047 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.074089 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.174716 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.275763 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.376204 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.477369 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.577790 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.670283 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.678353 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.778821 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.879478 5113 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 12 14:11:47 crc kubenswrapper[5113]: E1212 14:11:47.979989 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.081067 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.181491 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.281607 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.382592 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: I1212 14:11:48.482926 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.483629 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: I1212 14:11:48.484464 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:48 crc kubenswrapper[5113]: I1212 14:11:48.484516 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:48 crc kubenswrapper[5113]: I1212 14:11:48.484529 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.485254 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 14:11:48 crc kubenswrapper[5113]: I1212 14:11:48.485548 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.485804 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.584090 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.684475 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.785437 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.886510 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:48 crc kubenswrapper[5113]: E1212 14:11:48.987401 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.034364 5113 reflector.go:430] "Caches populated" 
type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.090171 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.090236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.090257 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.090282 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.090300 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.102513 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.113314 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.120438 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.193143 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.193204 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.193230 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.193269 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.193288 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.221912 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.296001 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.296074 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.296087 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.296107 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.296143 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.325161 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.398402 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.398460 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.398472 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.398486 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.398499 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.427362 5113 apiserver.go:52] "Watching apiserver" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.437461 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.438109 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jvcjp","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l","openshift-ovn-kubernetes/ovnkube-node-4qrsn","openshift-dns/node-resolver-7mzm7","openshift-multus/multus-hnmf9","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-gr95v","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-5dn52","openshift-multus/multus-additional-cni-plugins-xzdkb"] Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.440183 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.441464 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.441545 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.442310 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.442572 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.442610 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.444259 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.444811 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.447989 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.451615 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.452539 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.452578 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.452683 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.452803 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.453050 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.454698 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.462694 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.475873 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.487316 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.499081 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.500568 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.500635 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.500703 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.500727 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.500743 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.509317 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.519327 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.568747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.569300 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.569510 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.569653 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.569775 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.069752064 +0000 UTC m=+92.905001881 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.569899 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570075 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570305 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570506 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4adfdae7-6f5a-41b1-b923-d57003475a95-tmp-dir\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570687 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxzm\" (UniqueName: \"kubernetes.io/projected/4adfdae7-6f5a-41b1-b923-d57003475a95-kube-api-access-5sxzm\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570854 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4adfdae7-6f5a-41b1-b923-d57003475a95-hosts-file\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570996 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.571234 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod 
\"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.571409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.571560 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.571713 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.571878 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.572044 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.572227 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.570406 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.572622 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.072589647 +0000 UTC m=+92.907839514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.570072 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.572914 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.573017 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.585663 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.585730 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.585758 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.585895 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.085858961 +0000 UTC m=+92.921108828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.592357 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.592387 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.592400 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.592456 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.092443436 +0000 UTC m=+92.927693383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.602897 5113 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.603762 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.603816 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.603833 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.603853 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.603866 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.608787 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.608865 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.609060 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.609748 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.610702 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.631167 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.631307 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.631347 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.633826 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.634312 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.634620 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.642959 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.654474 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.668194 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673508 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673594 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4adfdae7-6f5a-41b1-b923-d57003475a95-tmp-dir\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673630 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5sxzm\" (UniqueName: \"kubernetes.io/projected/4adfdae7-6f5a-41b1-b923-d57003475a95-kube-api-access-5sxzm\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673671 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4adfdae7-6f5a-41b1-b923-d57003475a95-hosts-file\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673681 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.673769 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4adfdae7-6f5a-41b1-b923-d57003475a95-hosts-file\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.674210 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: 
\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.674274 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.674384 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4adfdae7-6f5a-41b1-b923-d57003475a95-tmp-dir\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.678386 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.686848 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.691844 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sxzm\" (UniqueName: \"kubernetes.io/projected/4adfdae7-6f5a-41b1-b923-d57003475a95-kube-api-access-5sxzm\") pod \"node-resolver-7mzm7\" (UID: \"4adfdae7-6f5a-41b1-b923-d57003475a95\") " pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.696647 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.705365 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.706193 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.706267 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.706277 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.706293 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.706303 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.714835 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.724051 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.726240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.726280 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.726289 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.726304 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.726356 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.733599 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.737403 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d
4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":4643
75011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.741345 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7mzm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4adfdae7-6f5a-41b1-b923-d57003475a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sxzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7mzm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.741678 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.741753 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc 
kubenswrapper[5113]: I1212 14:11:49.741765 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.741781 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.741791 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.752338 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.752769 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.755946 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.755995 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.756008 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.756028 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.756040 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.759709 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.766682 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.766541 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.765891 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.774338 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.774384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.774397 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.774417 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.774431 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776499 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776598 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776668 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776713 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776754 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776784 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776832 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776860 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776927 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776970 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.776996 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777031 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777068 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777095 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhr7d\" (UniqueName: \"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777165 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777435 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777474 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.777601 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.785932 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.789393 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.789458 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.789476 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.789502 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.789520 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.798469 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1dc7520-c661-426a-968f-51bfb5017ad6\\\",\\\"systemUUID\\\":\\\"c314c9b7-f73b-4b1a-9a4f-2e5666868333\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.798656 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.808145 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.808189 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.808203 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.808222 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.808234 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878412 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878475 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878500 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878526 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878526 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.878608 5113 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.878666 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config podName:74da26a1-71e9-47b4-bb18-cef44b9df055 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.378646235 +0000 UTC m=+93.213896052 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config") pod "ovnkube-node-4qrsn" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055") : object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878690 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878712 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878717 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878774 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878736 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878814 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878835 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878854 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.878843 5113 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878876 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878901 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878911 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878917 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878950 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878955 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878980 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.878988 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib podName:74da26a1-71e9-47b4-bb18-cef44b9df055 nodeName:}" failed. 
No retries permitted until 2025-12-12 14:11:50.378948955 +0000 UTC m=+93.214198782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib") pod "ovnkube-node-4qrsn" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055") : object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.878873 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879014 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879037 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhr7d\" (UniqueName: \"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.879090 5113 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: object "openshift-ovn-kubernetes"/"env-overrides" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879104 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879115 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.879183 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides podName:74da26a1-71e9-47b4-bb18-cef44b9df055 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.379169733 +0000 UTC m=+93.214419580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides") pod "ovnkube-node-4qrsn" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055") : object "openshift-ovn-kubernetes"/"env-overrides" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879185 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879627 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879653 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879702 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879752 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879813 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.879871 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.879956 5113 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.879994 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert podName:74da26a1-71e9-47b4-bb18-cef44b9df055 nodeName:}" failed. 
No retries permitted until 2025-12-12 14:11:50.37998572 +0000 UTC m=+93.215235547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert") pod "ovnkube-node-4qrsn" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055") : object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.880052 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.896841 5113 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.896894 5113 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.896911 5113 projected.go:194] Error preparing data for projected volume kube-api-access-zhr7d for pod openshift-ovn-kubernetes/ovnkube-node-4qrsn: [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: E1212 14:11:49.896997 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d podName:74da26a1-71e9-47b4-bb18-cef44b9df055 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.396968755 +0000 UTC m=+93.232218582 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhr7d" (UniqueName: "kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d") pod "ovnkube-node-4qrsn" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055") : [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.910807 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.910863 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.910877 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.910895 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.910908 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:49Z","lastTransitionTime":"2025-12-12T14:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:49 crc kubenswrapper[5113]: I1212 14:11:49.947723 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7mzm7" Dec 12 14:11:50 crc kubenswrapper[5113]: W1212 14:11:50.007720 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-ff226db24a9b2489978fb399a44743893c9f6d9993ebe742bbbf445bd27edb66 WatchSource:0}: Error finding container ff226db24a9b2489978fb399a44743893c9f6d9993ebe742bbbf445bd27edb66: Status 404 returned error can't find the container with id ff226db24a9b2489978fb399a44743893c9f6d9993ebe742bbbf445bd27edb66 Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.009588 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.013049 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.013063 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.013557 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.013849 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.013901 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.014282 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.014465 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.016880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.016938 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.016954 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.016975 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.016986 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.021675 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.033199 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.040986 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7mzm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4adfdae7-6f5a-41b1-b923-d57003475a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sxzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7mzm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.055217 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74da26a1-71e9-47b4-bb18-cef44b9df055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4qrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.066062 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.073905 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.074368 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.077815 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.081632 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.081722 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.081794 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.081833 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.081876 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.081855852 +0000 UTC m=+93.917105679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.081894 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.081886543 +0000 UTC m=+93.917136370 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.088966 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.089485 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.091608 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.093661 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.093712 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.095408 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.095430 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.109072 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.120377 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.120438 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.120451 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.120470 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.120484 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.122051 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.135052 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.147184 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-hnmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61630ce-4572-40eb-b245-937168ad79d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc7vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hnmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.159984 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.171621 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.180559 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182096 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-kubelet\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182258 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-socket-dir-parent\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182316 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182343 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrnz\" (UniqueName: \"kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182367 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-conf-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182397 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9gg\" (UniqueName: \"kubernetes.io/projected/357b225a-0c71-40ba-ac24-d769a9ff3f07-kube-api-access-lz9gg\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc 
kubenswrapper[5113]: I1212 14:11:50.182429 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182453 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182473 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182499 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-hostroot\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182534 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182575 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-os-release\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182599 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-netns\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182633 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-bin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182657 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-multus-certs\") pod 
\"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182680 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-etc-kubernetes\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182709 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182734 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-cnibin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182759 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-multus\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182789 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-cni-binary-copy\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182812 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-multus-daemon-config\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182833 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7vk\" (UniqueName: \"kubernetes.io/projected/f61630ce-4572-40eb-b245-937168ad79d4-kube-api-access-xc7vk\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182869 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-k8s-cni-cncf-io\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-system-cni-dir\") pod 
\"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.182915 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-cni-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183089 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183146 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183163 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183217 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.183198855 +0000 UTC m=+94.018448682 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183339 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183360 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183370 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.183435 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.183425363 +0000 UTC m=+94.018675190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.193018 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.201011 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7mzm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4adfdae7-6f5a-41b1-b923-d57003475a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sxzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7mzm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.201456 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.204249 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.204345 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.219150 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74da26a1-71e9-47b4-bb18-cef44b9df055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4qrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.222810 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.222850 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.222859 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.222892 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.222903 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.242448 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357b225a-0c71-40ba-ac24-d769a9ff3f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jvcjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283075 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-k8s-cni-cncf-io\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283145 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-system-cni-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283163 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-cni-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283184 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283210 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-kubelet\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283225 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66dabae3-7c42-4e9e-807b-faa04aeedc40-host\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283245 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcbg\" (UniqueName: \"kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283308 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-socket-dir-parent\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283337 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283355 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrnz\" (UniqueName: \"kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283371 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-conf-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283394 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9gg\" (UniqueName: \"kubernetes.io/projected/357b225a-0c71-40ba-ac24-d769a9ff3f07-kube-api-access-lz9gg\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283470 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283488 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283504 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-hostroot\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283527 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283554 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-os-release\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283570 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-netns\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-bin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283613 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-multus-certs\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283643 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-etc-kubernetes\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283670 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-cnibin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283684 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-multus\") pod \"multus-hnmf9\" (UID: 
\"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283704 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-cni-binary-copy\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-multus-daemon-config\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.283740 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7vk\" (UniqueName: \"kubernetes.io/projected/f61630ce-4572-40eb-b245-937168ad79d4-kube-api-access-xc7vk\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284088 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-k8s-cni-cncf-io\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284174 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-system-cni-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284222 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-cni-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284256 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-kubelet\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-socket-dir-parent\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284804 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.284966 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-multus-conf-dir\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.285510 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.285577 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.285620 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.785607655 +0000 UTC m=+93.620857482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.285759 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-hostroot\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290082 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290186 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-os-release\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290216 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-netns\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290253 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-bin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290280 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-run-multus-certs\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290308 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-etc-kubernetes\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-cnibin\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290383 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f61630ce-4572-40eb-b245-937168ad79d4-host-var-lib-cni-multus\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.290993 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-cni-binary-copy\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.291440 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f61630ce-4572-40eb-b245-937168ad79d4-multus-daemon-config\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.303667 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrnz\" (UniqueName: \"kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz\") pod \"ovnkube-control-plane-57b78d8988-9hc2l\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.307860 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7vk\" (UniqueName: \"kubernetes.io/projected/f61630ce-4572-40eb-b245-937168ad79d4-kube-api-access-xc7vk\") pod \"multus-hnmf9\" (UID: \"f61630ce-4572-40eb-b245-937168ad79d4\") " pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.309790 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9gg\" (UniqueName: \"kubernetes.io/projected/357b225a-0c71-40ba-ac24-d769a9ff3f07-kube-api-access-lz9gg\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.309750 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.320116 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.324777 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.324806 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.324816 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.324831 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.324841 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.330990 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-hnmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61630ce-4572-40eb-b245-937168ad79d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc7vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hnmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.342785 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.354522 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.363305 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.374680 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.382457 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7mzm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4adfdae7-6f5a-41b1-b923-d57003475a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sxzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7mzm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384702 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384846 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384917 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384944 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384972 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66dabae3-7c42-4e9e-807b-faa04aeedc40-host\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.384992 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcbg\" (UniqueName: \"kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.385023 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.385350 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66dabae3-7c42-4e9e-807b-faa04aeedc40-host\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.385407 5113 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: object "openshift-image-registry"/"image-registry-certificates" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.385464 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca podName:66dabae3-7c42-4e9e-807b-faa04aeedc40 nodeName:}" failed. 
No retries permitted until 2025-12-12 14:11:50.885440509 +0000 UTC m=+93.720690336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca") pod "node-ca-gr95v" (UID: "66dabae3-7c42-4e9e-807b-faa04aeedc40") : object "openshift-image-registry"/"image-registry-certificates" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.386194 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.386188 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.386274 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.389205 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.399375 5113 projected.go:289] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.399414 5113 projected.go:289] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.399428 5113 projected.go:194] Error preparing data for projected volume kube-api-access-bzcbg for pod openshift-image-registry/node-ca-gr95v: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.399497 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg podName:66dabae3-7c42-4e9e-807b-faa04aeedc40 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:50.899477708 +0000 UTC m=+93.734727535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bzcbg" (UniqueName: "kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg") pod "node-ca-gr95v" (UID: "66dabae3-7c42-4e9e-807b-faa04aeedc40") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.400114 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74da26a1-71e9-47b4-bb18-cef44b9df055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4qrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.402497 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hnmf9" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.406978 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357b225a-0c71-40ba-ac24-d769a9ff3f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jvcjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.413673 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.415626 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.415948 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc971e-760a-4530-b3b2-7975699b4383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwrnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwrnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9hc2l\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.416052 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.416531 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.416626 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: W1212 14:11:50.419914 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf61630ce_4572_40eb_b245_937168ad79d4.slice/crio-b3325e00e713b5bf19fcbbbd05ba05bba45d078ad6fa84f50ac8e59e48da7d00 WatchSource:0}: Error finding container b3325e00e713b5bf19fcbbbd05ba05bba45d078ad6fa84f50ac8e59e48da7d00: Status 404 returned error can't find the container with id b3325e00e713b5bf19fcbbbd05ba05bba45d078ad6fa84f50ac8e59e48da7d00 Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427046 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427101 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427116 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427165 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427182 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.427455 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.439759 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.451837 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-hnmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61630ce-4572-40eb-b245-937168ad79d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc7vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hnmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.462704 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.473749 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.484287 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.485391 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhr7d\" (UniqueName: \"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.489998 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhr7d\" (UniqueName: \"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") pod \"ovnkube-node-4qrsn\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.492821 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gr95v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66dabae3-7c42-4e9e-807b-faa04aeedc40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzcbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gr95v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.514087 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.518585 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: W1212 14:11:50.526697 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7fc971e_760a_4530_b3b2_7975699b4383.slice/crio-7cf40a6b81709e43acc2a332891bb48f8f453561f5b3780f289b8ec58d55e9c6 WatchSource:0}: Error finding container 7cf40a6b81709e43acc2a332891bb48f8f453561f5b3780f289b8ec58d55e9c6: Status 404 returned error can't find the container with id 7cf40a6b81709e43acc2a332891bb48f8f453561f5b3780f289b8ec58d55e9c6 Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.528574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.528617 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.528629 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.528642 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.528652 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.554154 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-7mzm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4adfdae7-6f5a-41b1-b923-d57003475a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sxzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7mzm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.558615 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.586221 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-rootfs\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.586264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-proxy-tls\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.586367 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.586450 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpnv\" (UniqueName: \"kubernetes.io/projected/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-kube-api-access-7zpnv\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.589838 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.608984 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.628261 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.629660 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.630439 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.630476 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.630490 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.630507 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.630518 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.649531 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.667610 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.667616 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.667767 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.669483 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687090 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-os-release\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687160 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdt9\" (UniqueName: \"kubernetes.io/projected/043e6bda-e4ff-4fbf-8925-adf929d1af6f-kube-api-access-rmdt9\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687209 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687252 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zpnv\" (UniqueName: \"kubernetes.io/projected/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-kube-api-access-7zpnv\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687313 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-rootfs\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687337 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-proxy-tls\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687360 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cnibin\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687409 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687440 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687487 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687511 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-system-cni-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.687535 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.692762 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-proxy-tls\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.692967 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-rootfs\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.693793 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.701395 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74da26a1-71e9-47b4-bb18-cef44b9df055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhr7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4qrsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.709734 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.729639 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.739391 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.739434 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.739445 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.739459 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.739470 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.747816 5113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.748379 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f1d7fe783797989b0d7a831dcb57cd6f0ca9c91543e3109b611f75c881fffb42"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.748421 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d4c40e1cb54654d35365836985606f93cb5d1f48ded45262366c7bc8eea83e5a"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.750278 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.789455 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.789884 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.790062 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.791028 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.791321 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792350 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: 
\"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792376 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792710 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792765 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792792 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792375 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.797169 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.795679 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.797498 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.792991 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). 
InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.796274 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.796888 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.797145 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.798070 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zpnv\" (UniqueName: \"kubernetes.io/projected/5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68-kube-api-access-7zpnv\") pod \"machine-config-daemon-5dn52\" (UID: \"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\") " pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.798217 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.798359 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.798482 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.798778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.797759 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.802864 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.803043 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.803178 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.803452 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.804707 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.304664969 +0000 UTC m=+94.139914806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804768 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804820 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804856 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804882 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804917 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.804968 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805018 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805073 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805146 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805197 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805234 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805277 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805324 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805360 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805402 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805457 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805491 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805523 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: 
\"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805614 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805642 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805671 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805743 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805770 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805800 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805830 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805855 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805882 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805913 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805938 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805962 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.805989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806017 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806044 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806077 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806104 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806181 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806212 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806244 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806271 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806299 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806328 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806357 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806384 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806416 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806447 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806473 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806503 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806562 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806587 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806616 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806649 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806690 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806718 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806746 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806772 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806801 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806829 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806893 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806922 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806952 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.806977 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807006 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807053 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc 
kubenswrapper[5113]: I1212 14:11:50.807180 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807215 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807257 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807300 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807449 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807551 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807583 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807613 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807637 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807699 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807759 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807800 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807842 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807867 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807890 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807918 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807943 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.807989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808026 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808062 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808103 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808753 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808785 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.808824 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.809366 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.809313 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.809322 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.809830 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.809844 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.810390 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.810717 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.810743 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.810991 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.811262 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.811300 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.811421 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.811474 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.811939 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812021 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812205 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812347 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812676 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812769 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812913 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813009 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813094 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813197 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813282 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813363 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813455 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813527 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813602 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813776 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813833 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813910 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814250 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812899 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812949 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.812958 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813513 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813624 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813652 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.813904 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814073 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814572 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814557 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814575 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814864 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814929 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). 
InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.814941 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.815326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.816422 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.816618 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.816807 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.816871 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.818397 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.816566 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820333 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820402 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820428 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820554 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820587 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.815594 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.821003 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.820986 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.817509 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.821665 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.822503 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.822549 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.822820 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.823173 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.823591 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.823830 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.823935 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.824088 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.824117 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.825296 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.826986 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357b225a-0c71-40ba-ac24-d769a9ff3f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lz9gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jvcjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.827618 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). 
InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.828165 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.828511 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.828823 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.829570 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.829588 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.829898 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.830532 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.831508 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). 
InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.832265 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.833371 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.833815 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.834552 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.839954 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.831391 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.859693 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.860142 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.860391 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.860631 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.861838 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.862327 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.862675 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.863513 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.864064 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.864182 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.864450 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.864716 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.865055 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.865915 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.865967 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.865978 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.865998 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866008 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866187 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866192 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866289 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866375 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866396 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866505 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866640 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866666 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866960 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.866985 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867475 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867910 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867937 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867955 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867971 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.867992 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868015 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868058 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868086 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868146 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868168 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868190 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868213 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868234 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868252 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868294 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868320 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868341 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868361 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868384 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868410 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868433 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868456 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868475 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868493 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868527 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868544 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868562 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868578 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868594 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868610 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868631 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868649 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868668 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868688 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 
14:11:50.868706 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868721 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868739 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868759 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868777 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868820 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868838 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868854 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868872 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868891 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 14:11:50 crc 
kubenswrapper[5113]: I1212 14:11:50.868910 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868925 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868942 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868958 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868975 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.868991 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869009 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869025 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869041 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869075 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") 
" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869093 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869113 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869191 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869211 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869228 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869247 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869263 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869278 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869296 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869337 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869353 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869371 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869389 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869405 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869421 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869438 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869456 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869472 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869488 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869521 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869540 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869556 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869577 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869597 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869613 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869632 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869648 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869667 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: 
\"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869686 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869705 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869724 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869740 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869757 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869774 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869793 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869813 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869840 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869858 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869922 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869973 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.869998 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-system-cni-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870015 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-os-release\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870099 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmdt9\" (UniqueName: \"kubernetes.io/projected/043e6bda-e4ff-4fbf-8925-adf929d1af6f-kube-api-access-rmdt9\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870152 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870254 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cnibin\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870305 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870391 5113 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870402 5113 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870412 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870421 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870430 5113 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870439 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870449 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870457 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870466 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870474 5113 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870484 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870496 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on 
node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870504 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870514 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870525 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870535 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870545 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870554 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870562 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870571 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870580 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870588 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870599 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870612 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870623 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on 
node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870638 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870649 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870661 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870700 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870713 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870724 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870735 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870744 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870753 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870762 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870771 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870779 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870789 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 
14:11:50.870798 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870808 5113 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870819 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870830 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870839 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870848 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870856 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870866 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870875 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870887 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870896 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870904 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870914 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870924 5113 reconciler_common.go:299] 
"Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870934 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870943 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870952 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870961 5113 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870970 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870979 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870987 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.870995 5113 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871004 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871014 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871023 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871033 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871042 5113 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871051 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871060 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871069 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871078 5113 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871089 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871099 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871108 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.871778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.878065 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.878369 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.878534 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.878779 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.878915 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.879231 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.879586 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.880203 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.880624 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.881017 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.881514 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.881951 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.881966 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.882285 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.882536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.883410 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.885095 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.885505 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.885782 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.886052 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.894076 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.894918 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.894941 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.894999 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895016 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895030 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895041 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895050 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895064 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895075 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895086 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895096 5113 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895106 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895115 5113 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895138 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 
crc kubenswrapper[5113]: I1212 14:11:50.895148 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895157 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895167 5113 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895177 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895188 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895197 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895197 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895207 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895249 5113 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895263 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895276 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895287 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895297 5113 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895307 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895318 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895329 5113 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895352 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895370 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895386 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895444 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895460 5113 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895539 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895600 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.895967 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.896008 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.896326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.896351 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.896587 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.896880 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.897173 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.897245 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.897360 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.897743 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898061 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898073 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898260 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898370 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898724 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.898745 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899034 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899111 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899298 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899498 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899680 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899732 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.899930 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.900050 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.900346 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.900465 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.900709 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.900972 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901002 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901207 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901280 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901537 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901753 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901838 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901983 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.901902 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902296 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902574 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902581 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902805 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.902936 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903044 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903155 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903392 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903529 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903877 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.903971 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.904194 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.904443 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.904861 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.905261 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.905677 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.907498 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.907613 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.907728 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-os-release\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.907964 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: E1212 14:11:50.908037 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:51.908018168 +0000 UTC m=+94.743267995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.908078 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-system-cni-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.908155 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.908234 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.908533 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.908669 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"461eafb43db7d84753080bf112961e4347a4014f56f6322a9931aca298766d20"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909166 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909237 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/043e6bda-e4ff-4fbf-8925-adf929d1af6f-cnibin\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909429 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). 
InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909467 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909780 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/043e6bda-e4ff-4fbf-8925-adf929d1af6f-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909934 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.910153 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.909536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.911668 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hnmf9" event={"ID":"f61630ce-4572-40eb-b245-937168ad79d4","Type":"ContainerStarted","Data":"b3325e00e713b5bf19fcbbbd05ba05bba45d078ad6fa84f50ac8e59e48da7d00"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.912399 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.912697 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7mzm7" event={"ID":"4adfdae7-6f5a-41b1-b923-d57003475a95","Type":"ContainerStarted","Data":"8fcf6a998c8c8121daec8044e5adc291d83a25ab1ab4139cdda9935ca8870d47"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.912726 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7mzm7" event={"ID":"4adfdae7-6f5a-41b1-b923-d57003475a95","Type":"ContainerStarted","Data":"dc28159313c2acbd2f71f76c1dca27cd1331366a72c29918f7881b54919a5ff4"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.929775 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.930657 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.931037 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.931201 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.932929 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.941276 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.941682 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc971e-760a-4530-b3b2-7975699b4383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwrnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwrnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9hc2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc 
kubenswrapper[5113]: I1212 14:11:50.941942 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"be7ad769f2e83aa26ee2ce005a65b02601ef93d135e6b970b104938f1e9bb1fe"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.943940 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.944657 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"2ae16c94c73243550c035e4b50ab531c22c8c260a9e248c74642bfaa87abbde8"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.945503 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerStarted","Data":"7cf40a6b81709e43acc2a332891bb48f8f453561f5b3780f289b8ec58d55e9c6"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.951642 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.952368 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmdt9\" (UniqueName: \"kubernetes.io/projected/043e6bda-e4ff-4fbf-8925-adf929d1af6f-kube-api-access-rmdt9\") pod \"multus-additional-cni-plugins-xzdkb\" (UID: \"043e6bda-e4ff-4fbf-8925-adf929d1af6f\") " pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.953076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"ff226db24a9b2489978fb399a44743893c9f6d9993ebe742bbbf445bd27edb66"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.954925 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1a146f4-e3da-4081-9515-d087b49e5f3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31de24bc1eb798fac3083251b82b21b3ddaa03b62cf389cc751edc8edbacad37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://abe78f4493f4d871e29656306a9958d27363fcc43f0a820d66ecbeaf45220e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"r
eady\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00c5bb0f8e7e4ac1050355875ddd3cd32a5385c070d2a32cec02b7f9bf40ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27397e5d42e374524ba5bca68838bc5096d77e381799deae93d4d7a10f32d279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27397e5d42e374524ba5bca68838bc5096d77e381799deae93d4d7a10f32d279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.957946 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.958450 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.958763 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.959373 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.959992 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960169 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960188 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960234 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.962458 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960277 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960397 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960404 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960454 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960520 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960695 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.960868 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.961199 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.962320 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.965318 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.965858 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969072 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5df458bc-208e-4759-bf11-bd3e478879d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://690e612679d6f018155dde739043394344f531477646d0b549357e8be58c5e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5dd29b60ac54434d6905e5e2060984558d4ebd1c0dbeb2fc26de7d4edb2350\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://086595c87c1781f7167b3b7e320cd2e025c9b0b2eeb0349797a53b94d5ecf160\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969794 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969851 
5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969864 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969882 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.969893 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:50Z","lastTransitionTime":"2025-12-12T14:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.963112 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.983479 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.984421 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996403 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996483 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcbg\" (UniqueName: \"kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996629 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996641 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996650 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996660 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996669 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996677 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996706 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996714 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996722 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996732 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: 
I1212 14:11:50.996740 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996748 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996756 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996783 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996791 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996800 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996809 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996821 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996829 5113 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996857 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996868 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996877 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996885 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.996894 
5113 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.997436 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:50 crc kubenswrapper[5113]: I1212 14:11:50.998290 5113 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.000772 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66dabae3-7c42-4e9e-807b-faa04aeedc40-serviceca\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.002312 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003246 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003310 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003324 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003337 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003346 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003357 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003368 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003377 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003385 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003394 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003402 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003411 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003419 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003430 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003439 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003448 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003458 5113 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003466 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003475 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003485 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003493 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on 
node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003502 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003510 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003518 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003526 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003536 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003544 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003552 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003560 5113 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003568 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003577 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003585 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003593 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003601 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc 
kubenswrapper[5113]: I1212 14:11:51.003610 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003620 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003629 5113 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003637 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003645 5113 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003654 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003665 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003674 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003682 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003691 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003699 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003708 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003718 5113 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003727 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003735 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003744 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003752 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003761 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003770 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003780 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003788 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003796 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003805 5113 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003815 5113 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003873 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003883 5113 reconciler_common.go:299] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003895 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003903 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003911 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003920 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003928 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003937 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003945 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003954 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003962 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003969 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003978 5113 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003987 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.003995 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004003 5113 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004011 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004019 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004028 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004038 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004046 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004056 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004065 5113 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004074 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004082 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004091 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004100 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004108 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004116 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004137 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004146 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.004154 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.005710 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5bd04e2-8efe-48c6-9dfb-1ac5def85888\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://97a86d5b85dcf2115999c68636a544f2f2d3c85ee20b461ed27b2e15640fa556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\
\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0009599cb3f5dd74008214ffbbaaecd1838b243f8a8299a307f3d5114800365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8bf923968a6c879c13907d07b73c554d316b86acb88d866e3fa0a772a4fee32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c822d500a3b37a2d55f9c587c4f369ef9bcb10fe8265a360b7c48548574da69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc48af6a76044e7783015f4b054d5951ab72fd2e2ce7bd8fa050ec3b16f
d7ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2289409f704de09fd5046d240291927dcfd1d0bd2e405d7a5c774b5d3f60be7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2289409f704de09fd5046d240291927dcfd1d0bd2e405d7a5c774b5d3f60be7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://358a32c664caec74b7afc5ba497da5182c9ad7295f248c6622b51655b89b3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://358a32c664caec74b7afc5ba497da5182c9ad7295f248c6622b51655b89b3d35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fc64ca267a6f3f76f661088ad8bfeac5b5b49f861206788e60f537c78a07cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fc64ca267a6f3f76f661088ad8bfeac5b5b49f861206788e60f537c78a07cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.006174 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcbg\" (UniqueName: \"kubernetes.io/projected/66dabae3-7c42-4e9e-807b-faa04aeedc40-kube-api-access-bzcbg\") pod \"node-ca-gr95v\" (UID: \"66dabae3-7c42-4e9e-807b-faa04aeedc40\") " pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.038566 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5336457e-56b8-4455-9cbb-388bab880a59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ae59a492a24b7e319fb6e2535bd395840e015bc16f625d44a75bc0d8b996b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b580f18ad4f07a1213ec639cdb9df787c5ef723b26eded55ee758cb6f9f62cb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://206f507a6fdf88d3e1d29676b4a01b01c2d876ce2806953af724790345c9e763\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T14:11:26Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1212 14:11:25.406025 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 14:11:25.406232 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 14:11:25.407336 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-872291724/tls.crt::/tmp/serving-cert-872291724/tls.key\\\\\\\"\\\\nI1212 14:11:26.074593 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 14:11:26.077110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 14:11:26.077156 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 14:11:26.077199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 14:11:26.077210 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 14:11:26.084078 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1212 14:11:26.084102 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 14:11:26.084109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 14:11:26.084146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 14:11:26.084158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 14:11:26.084165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 14:11:26.084171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 14:11:26.084179 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 14:11:26.085318 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T14:11:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dcb1f921c832112e5a3717359d76d330218be7e53f95c41d75b5738ce073c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ccb7f9b929e828dd8220bfa92ffca05f4a2a78a97f17a86787dcf729ff4feafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb7f9b929e828dd8220bfa92ffca05f4a2a78a97f17a86787dcf729ff4feafc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.042636 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gr95v" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.071962 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.072012 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.072025 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.072038 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.072048 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.074786 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.104895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.104979 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.105013 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.105084 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.105109 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:53.105088434 +0000 UTC m=+95.940338271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.105172 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:53.105157216 +0000 UTC m=+95.940407043 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.113995 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.156337 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-hnmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f61630ce-4572-40eb-b245-937168ad79d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc7vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hnmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.174577 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.174621 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.174631 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.174647 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.174658 5113 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.201159 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.205786 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.205912 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206364 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206420 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206445 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206361 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206590 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206613 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206544 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:53.20650772 +0000 UTC m=+96.041757587 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.206721 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:53.206692967 +0000 UTC m=+96.041942864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.236430 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.274962 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.276796 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.276869 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.276889 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.276914 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.276932 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.307208 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.307836 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:11:52.307782643 +0000 UTC m=+95.143032520 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.313929 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gr95v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66dabae3-7c42-4e9e-807b-faa04aeedc40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzcbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gr95v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.356145 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:11:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zpnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zpnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:11:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dn52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.378405 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.378444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 
14:11:51.378453 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.378466 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.378476 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.396747 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac6364c-acdc-4cd6-a368-279989a5b439\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbf84d3d456666ce03f4248e30f60ecaca28ffb59032e960ed930a9f53e50549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T14:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bef0de3fac9d6c6f6bbedd63d19310499aa7c56d41cc410fdc1050382979f88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bef0de3fac9d6c6f6bbedd63d19310499aa7c56d41cc410fdc1050382979f88e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T14:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T14:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T14:10:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.490355 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.490510 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.490868 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.490990 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.491293 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.491393 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.492261 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.492298 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.492311 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.492328 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.492340 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.495273 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.496157 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.497213 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.498861 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.501092 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.502568 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.503840 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.505102 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.505687 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.506950 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.507819 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.509473 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.510102 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.511558 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.512040 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.512825 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.513920 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.514868 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.516163 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.517011 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.517845 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.519500 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.520519 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.521389 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.522571 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.523454 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.581805 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.582842 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.607812 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.607855 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.607863 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.607876 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.607886 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.611800 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7mzm7" podStartSLOduration=73.611784224 podStartE2EDuration="1m13.611784224s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:51.477285795 +0000 UTC m=+94.312535632" watchObservedRunningTime="2025-12-12 14:11:51.611784224 +0000 UTC m=+94.447034061"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.617485 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.619398 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.621976 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.623697 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.710659 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.710730 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.710744 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.710764 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.710797 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.719518 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.719498347 podStartE2EDuration="2.719498347s" podCreationTimestamp="2025-12-12 14:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:51.719166246 +0000 UTC m=+94.554416083" watchObservedRunningTime="2025-12-12 14:11:51.719498347 +0000 UTC m=+94.554748174"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.812742 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.812801 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.812816 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.812835 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.812847 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.828671 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.848087 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.849070 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.850284 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.850985 5113 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.851087 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.915104 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.915149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.915159 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.915175 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.915185 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:51Z","lastTransitionTime":"2025-12-12T14:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.920098 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.939056 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:11:51 crc kubenswrapper[5113]: E1212 14:11:51.939929 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:53.939907005 +0000 UTC m=+96.775156832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:11:51 crc kubenswrapper[5113]: I1212 14:11:51.944696 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.017898 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.017941 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.017953 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.017989 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.018003 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.078447 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.078421964 podStartE2EDuration="3.078421964s" podCreationTimestamp="2025-12-12 14:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:52.077460373 +0000 UTC m=+94.912710220" watchObservedRunningTime="2025-12-12 14:11:52.078421964 +0000 UTC m=+94.913671801"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.119671 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.119725 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.119736 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.119758 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.119771 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.157697 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.159034 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.160977 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.161884 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.163749 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.164765 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.165845 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.166874 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.168427 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.169438 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.170865 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.174192 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.174925 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.176050 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.177365 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.178624 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.179719 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.180539 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.181648 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182428 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"81093dd0ce02c961d9804edb14d79e9da130e839b13b26ca274b59b365da4fab"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182460 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hnmf9" event={"ID":"f61630ce-4572-40eb-b245-937168ad79d4","Type":"ContainerStarted","Data":"1a9d9ed7a0744ebcd9ff37c8fac9b74ab9642f0b437e4772adf7a2837e766b3b"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182487 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"4a3efe72e0f387f4cd7268a568aca8a828923df9591740fe92f5adc14be636a4"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182496 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerStarted","Data":"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182517 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerStarted","Data":"ca393881b979c9a06696d21bcd60284dd3526d934b6831a75ec285488c0a22c9"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182526 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gr95v" event={"ID":"66dabae3-7c42-4e9e-807b-faa04aeedc40","Type":"ContainerStarted","Data":"1fcbf19cff0a5ccd6362ea82f77eb17eba06c1eff11fbfd8e0a2b85c0daea92c"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.182557 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"2d6235d8d3819706d3681b11dac0ad1065edd7e492a7056726c5e7e539582baf"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.211224 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=3.2112038370000002 podStartE2EDuration="3.211203837s" podCreationTimestamp="2025-12-12 14:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:52.120114187 +0000 UTC m=+94.955364034" watchObservedRunningTime="2025-12-12 14:11:52.211203837 +0000 UTC m=+95.046453664"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.222003 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.222325 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.222338 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.222353 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.222363 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.342767 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:11:52 crc kubenswrapper[5113]: E1212 14:11:52.343002 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:11:54.342966926 +0000 UTC m=+97.178216773 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.406270 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.406317 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.406331 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.406350 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.406374 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.482384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:11:52 crc kubenswrapper[5113]: E1212 14:11:52.482509 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.587461 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.587509 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.587522 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.587541 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.587553 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.689360 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.689414 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.689431 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.689454 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.689472 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.791688 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.791738 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.791747 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.791762 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.791772 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.894184 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.894225 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.894237 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.894251 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.894260 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.970277 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="4a3efe72e0f387f4cd7268a568aca8a828923df9591740fe92f5adc14be636a4" exitCode=0
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.970403 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"4a3efe72e0f387f4cd7268a568aca8a828923df9591740fe92f5adc14be636a4"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.972494 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748"}
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.993937 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=3.993920383 podStartE2EDuration="3.993920383s" podCreationTimestamp="2025-12-12 14:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:52.212041654 +0000 UTC m=+95.047291491" watchObservedRunningTime="2025-12-12 14:11:52.993920383 +0000 UTC m=+95.829170210"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.996740 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.996769 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.996777 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.996789 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:52 crc kubenswrapper[5113]: I1212 14:11:52.996797 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:52Z","lastTransitionTime":"2025-12-12T14:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.098609 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.098652 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.098661 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.098677 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.098690 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.172849 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.172919 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.173003 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.173017 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.173075 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:57.173054451 +0000 UTC m=+100.008304288 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.173096 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:57.173089222 +0000 UTC m=+100.008339049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.200689 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.200727 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.200736 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.200750 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.200762 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.274007 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.274159 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274349 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274375 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274393 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274469 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:57.274447107 +0000 UTC m=+100.109696974 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274961 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.274991 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.275009 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.275060 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:57.275044677 +0000 UTC m=+100.110294534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.304693 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.304863 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.304893 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.304970 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.304991 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.422691 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.422743 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.422760 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.422784 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.422800 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.483947 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.484167 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.484697 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.484814 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.484903 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.485011 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.526022 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.526078 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.526091 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.526109 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.526139 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.627995 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.628164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.628191 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.628212 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.628227 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.730825 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.730878 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.730902 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.730920 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.730930 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.833198 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.833258 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.833269 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.833283 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.833295 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.936074 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.936170 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.936189 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.936213 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.936233 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:53Z","lastTransitionTime":"2025-12-12T14:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.977013 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"297159585a2d1f5b098dda2b8c61d98fb65fe35cee7727d85a11fd3f30091ca0"} Dec 12 14:11:53 crc kubenswrapper[5113]: I1212 14:11:53.985646 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.985788 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:53 crc kubenswrapper[5113]: E1212 14:11:53.985885 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:11:57.985861793 +0000 UTC m=+100.821111680 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.038480 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.038597 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.038651 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.038678 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.038695 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.140812 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.140880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.140894 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.140912 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.140944 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.244531 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.244575 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.244587 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.244605 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.244617 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.346581 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.346625 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.346638 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.346655 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.346666 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.422490 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:54 crc kubenswrapper[5113]: E1212 14:11:54.422736 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:11:58.422704128 +0000 UTC m=+101.257953955 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.453897 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.453934 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.453943 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.453956 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.453965 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.481963 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:54 crc kubenswrapper[5113]: E1212 14:11:54.482171 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.557256 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.557300 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.557312 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.557331 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.557342 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.659678 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.659722 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.659732 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.659746 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.659755 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.761353 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.761394 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.761404 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.761417 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.761430 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.863503 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.863550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.863563 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.863579 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.863592 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.965898 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.965976 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.965987 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.966004 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.966025 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:54Z","lastTransitionTime":"2025-12-12T14:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.984289 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"ceba849bd0445b0fb2ea2c10866e8c04cdb72059e168a7aa59f86db129ad709b"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.985884 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerStarted","Data":"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.987192 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerStarted","Data":"c2c131e3f6c1b4e8c506db7f65d840646a7192f461cae8c1a9fad87a12368641"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.988489 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gr95v" event={"ID":"66dabae3-7c42-4e9e-807b-faa04aeedc40","Type":"ContainerStarted","Data":"12ffd96e3a08b097b52288e602f747ad3008de6802bb95a4233379d64669da71"} Dec 12 14:11:54 crc kubenswrapper[5113]: I1212 14:11:54.990290 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"88059176e1e232d8c70292c99b2bf5ba9ecdadd2fddbc2b573e43056ed1bcb16"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.013098 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hnmf9" podStartSLOduration=77.013078595 podStartE2EDuration="1m17.013078595s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:54.325233581 +0000 UTC m=+97.160483428" watchObservedRunningTime="2025-12-12 14:11:55.013078595 +0000 UTC m=+97.848328422" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.042594 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" podStartSLOduration=77.04257463 podStartE2EDuration="1m17.04257463s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:55.04165545 +0000 UTC m=+97.876905297" watchObservedRunningTime="2025-12-12 14:11:55.04257463 +0000 UTC m=+97.877824457" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.078789 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.078844 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.078858 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.078880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.078891 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.093366 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gr95v" podStartSLOduration=77.09334488 podStartE2EDuration="1m17.09334488s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:55.073834732 +0000 UTC m=+97.909084579" watchObservedRunningTime="2025-12-12 14:11:55.09334488 +0000 UTC m=+97.928594707" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.093492 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podStartSLOduration=77.093488745 podStartE2EDuration="1m17.093488745s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:11:55.093309849 +0000 UTC m=+97.928559706" watchObservedRunningTime="2025-12-12 14:11:55.093488745 +0000 UTC m=+97.928738572" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.181083 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.181141 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.181152 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.181165 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.181174 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.282780 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.282826 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.282838 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.282854 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.282863 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.385224 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.385543 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.385554 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.385570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.385586 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.482171 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:55 crc kubenswrapper[5113]: E1212 14:11:55.482411 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.482842 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:55 crc kubenswrapper[5113]: E1212 14:11:55.482927 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.483107 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:55 crc kubenswrapper[5113]: E1212 14:11:55.483182 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.487146 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.487197 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.487257 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.487277 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.487308 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.589608 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.589655 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.589668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.589683 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.589695 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.692460 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.692511 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.692521 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.692537 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.692549 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.795174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.795841 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.795940 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.796041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.796144 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.905882 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.905921 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.905932 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.905945 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:55 crc kubenswrapper[5113]: I1212 14:11:55.905954 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:55Z","lastTransitionTime":"2025-12-12T14:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.003585 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"aeb49bc4fec2311eb34b420f5efcaca8b3f5e898cf18eff29bd8d356a441ba52"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.003653 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"0f6cbc907d43156329193d6afe6cbbf8751a75292df13623820b8e45fe47be0b"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.007902 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008181 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008311 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008413 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008507 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008214 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"c2c131e3f6c1b4e8c506db7f65d840646a7192f461cae8c1a9fad87a12368641"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.008164 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="c2c131e3f6c1b4e8c506db7f65d840646a7192f461cae8c1a9fad87a12368641" exitCode=0 Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.111628 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.111888 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.111991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.112112 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.112224 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.214413 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.214469 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.214484 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.214503 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.214516 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.316266 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.316314 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.316326 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.316343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.316355 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.427089 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.427153 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.427165 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.427179 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.427189 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.482564 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:56 crc kubenswrapper[5113]: E1212 14:11:56.482710 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.529149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.529191 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.529201 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.529214 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.529225 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.631243 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.631282 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.631307 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.631322 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.631335 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.733767 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.733845 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.733856 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.733870 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.733881 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.835812 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.836628 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.836699 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.836767 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.836838 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.938927 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.938970 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.938982 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.939024 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:56 crc kubenswrapper[5113]: I1212 14:11:56.939036 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:56Z","lastTransitionTime":"2025-12-12T14:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.019980 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"ecbd97312a89db090730ecb5c8d7d34608a6d785113daf82cfd6cad10384efcc"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.042031 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.042097 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.042110 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.042146 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.042160 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.144941 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.144989 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.145017 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.145040 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.145050 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.239434 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.239584 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.239694 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.239762 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.239830 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:05.239798665 +0000 UTC m=+108.075048532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.239867 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:05.239852967 +0000 UTC m=+108.075102834 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.249109 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.249176 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.249190 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.249209 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.249222 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.340818 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.340927 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341069 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341092 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341113 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341142 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341160 5113 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341205 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:05.341187041 +0000 UTC m=+108.176436868 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341209 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.341326 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:05.341290714 +0000 UTC m=+108.176540581 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.351678 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.351726 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.351738 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.351759 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.351771 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.454315 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.454362 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.454374 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.454391 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.454402 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.483247 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.483442 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.483451 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.483478 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.483542 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:11:57 crc kubenswrapper[5113]: E1212 14:11:57.483610 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.556520 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.556575 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.556587 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.556603 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.556615 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.658394 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.658455 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.658468 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.658486 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.658502 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.760372 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.760613 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.760699 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.760790 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.760872 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.863574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.863660 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.863680 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.863705 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.863738 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.965515 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.965809 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.965882 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.965951 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:57 crc kubenswrapper[5113]: I1212 14:11:57.966077 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:57Z","lastTransitionTime":"2025-12-12T14:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.047684 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:58 crc kubenswrapper[5113]: E1212 14:11:58.047835 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:58 crc kubenswrapper[5113]: E1212 14:11:58.047910 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:06.047890562 +0000 UTC m=+108.883140389 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.068187 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.068245 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.068258 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.068276 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.068289 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.170281 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.170343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.170357 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.170376 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.170390 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.273207 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.273306 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.273332 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.273359 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.273379 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.374948 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.375215 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.375229 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.375257 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.375269 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.453021 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:11:58 crc kubenswrapper[5113]: E1212 14:11:58.453401 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:06.453371252 +0000 UTC m=+109.288621079 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.560458 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:11:58 crc kubenswrapper[5113]: E1212 14:11:58.560662 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.561508 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.561546 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.561557 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.561571 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.561580 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.663186 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.663234 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.663247 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.663266 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.663280 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.771370 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.771424 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.771434 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.771449 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.771463 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.874484 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.874538 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.874550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.874567 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.874578 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.977072 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.977159 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.977174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.977191 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:58 crc kubenswrapper[5113]: I1212 14:11:58.977203 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:58Z","lastTransitionTime":"2025-12-12T14:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.029598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"898e119131ccbd53d3b08653df43ed219b33f88ba94eed34e8afda71f84a7b81"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.031628 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerStarted","Data":"6f14c1a581bdfb4054556bbf181912ddf1fc2110b4e81a3303247452a3a2cc35"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.079746 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.079813 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.079827 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.079850 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.079864 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.182175 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.182229 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.182242 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.182260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.182273 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.358436 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.358502 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.358516 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.358535 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.358547 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.464086 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.464154 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.464167 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.464183 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.464193 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.482393 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:11:59 crc kubenswrapper[5113]: E1212 14:11:59.482532 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.482541 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:11:59 crc kubenswrapper[5113]: E1212 14:11:59.482683 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.482753 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:11:59 crc kubenswrapper[5113]: E1212 14:11:59.482819 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.566976 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.567023 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.567037 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.567059 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.567075 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.669452 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.669501 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.669514 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.669529 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.669541 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.772041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.772089 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.772102 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.772147 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.772159 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.875069 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.875135 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.875153 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.875173 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:11:59 crc kubenswrapper[5113]: I1212 14:11:59.875187 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:11:59Z","lastTransitionTime":"2025-12-12T14:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.068388 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.068449 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.068460 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.068479 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.068490 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:00Z","lastTransitionTime":"2025-12-12T14:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.069611 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.069676 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.069690 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.069711 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.069726 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T14:12:00Z","lastTransitionTime":"2025-12-12T14:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.072890 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"a0fbb92681c46a42102eed9649761e19acecf2f81db190c81cd551fd57a33f7b"} Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.115041 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr"] Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.139511 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.141874 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.142103 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.142310 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.142440 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.178346 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.178417 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9bafe8-9087-4499-bb10-c2e804fcc0db-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.178441 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a9bafe8-9087-4499-bb10-c2e804fcc0db-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.178607 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.178682 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a9bafe8-9087-4499-bb10-c2e804fcc0db-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279428 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279482 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279553 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9bafe8-9087-4499-bb10-c2e804fcc0db-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279582 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a9bafe8-9087-4499-bb10-c2e804fcc0db-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279645 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.279677 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a9bafe8-9087-4499-bb10-c2e804fcc0db-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.281453 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a9bafe8-9087-4499-bb10-c2e804fcc0db-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.281521 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a9bafe8-9087-4499-bb10-c2e804fcc0db-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.286780 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9bafe8-9087-4499-bb10-c2e804fcc0db-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.304951 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a9bafe8-9087-4499-bb10-c2e804fcc0db-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qjnbr\" (UID: \"1a9bafe8-9087-4499-bb10-c2e804fcc0db\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr"
Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.481788 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:00 crc kubenswrapper[5113]: E1212 14:12:00.481956 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.510580 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr"
Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.561684 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 12 14:12:00 crc kubenswrapper[5113]: I1212 14:12:00.569622 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.077710 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="6f14c1a581bdfb4054556bbf181912ddf1fc2110b4e81a3303247452a3a2cc35" exitCode=0
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.077814 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"6f14c1a581bdfb4054556bbf181912ddf1fc2110b4e81a3303247452a3a2cc35"}
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.080563 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" event={"ID":"1a9bafe8-9087-4499-bb10-c2e804fcc0db","Type":"ContainerStarted","Data":"7ea4ac0ba80bc0b90c763bfb03fca96a06171434393ece02f7cb2c4f162bc5bb"}
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.080613 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" event={"ID":"1a9bafe8-9087-4499-bb10-c2e804fcc0db","Type":"ContainerStarted","Data":"f7068f23eb8e0e4a6226c68e3f52e23fcfb52c627fd0f762d3814db4d7002327"}
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.122482 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qjnbr" podStartSLOduration=83.122458938 podStartE2EDuration="1m23.122458938s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:01.121854118 +0000 UTC m=+103.957103965" watchObservedRunningTime="2025-12-12 14:12:01.122458938 +0000 UTC m=+103.957708765"
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.482355 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:01 crc kubenswrapper[5113]: E1212 14:12:01.482765 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.482604 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:01 crc kubenswrapper[5113]: E1212 14:12:01.482858 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:01 crc kubenswrapper[5113]: I1212 14:12:01.482409 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:01 crc kubenswrapper[5113]: E1212 14:12:01.482927 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:02 crc kubenswrapper[5113]: I1212 14:12:02.086044 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="b6d81abbc5bde724d805539d7661501b78f5279565c7aa9f5e7a89b74ac0fce1" exitCode=0
Dec 12 14:12:02 crc kubenswrapper[5113]: I1212 14:12:02.086105 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"b6d81abbc5bde724d805539d7661501b78f5279565c7aa9f5e7a89b74ac0fce1"}
Dec 12 14:12:02 crc kubenswrapper[5113]: I1212 14:12:02.481719 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:02 crc kubenswrapper[5113]: E1212 14:12:02.481842 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.092248 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"16651a31482f1c0cd6fe233f3183f63d8e017c1471ce1d233cf2547ae35770a0"}
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.093999 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="3215e2b97562aa3809d893d7317eb4d909dd41119789dd73896d26b636c2aff1" exitCode=0
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.094046 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"3215e2b97562aa3809d893d7317eb4d909dd41119789dd73896d26b636c2aff1"}
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.486101 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:03 crc kubenswrapper[5113]: E1212 14:12:03.486295 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.486788 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:03 crc kubenswrapper[5113]: E1212 14:12:03.486851 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:03 crc kubenswrapper[5113]: I1212 14:12:03.486911 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:03 crc kubenswrapper[5113]: E1212 14:12:03.486967 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:04 crc kubenswrapper[5113]: I1212 14:12:04.109399 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerStarted","Data":"e1aad3654956f6edc266f46b2e8afbe457c56b554633eb55e505bd64a87f79d7"}
Dec 12 14:12:04 crc kubenswrapper[5113]: I1212 14:12:04.482550 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:04 crc kubenswrapper[5113]: E1212 14:12:04.482919 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.002340 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.002556 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.240350 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.240406 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.240432 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.240498 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.240482258 +0000 UTC m=+124.075732085 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.240508 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.240540 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.24053183 +0000 UTC m=+124.075781657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.341901 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.342015 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342142 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342174 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342185 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342248 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.342231346 +0000 UTC m=+124.177481253 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342272 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342321 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342335 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.342425 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.342409981 +0000 UTC m=+124.177659808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.482777 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.482824 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.482923 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.483317 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:05 crc kubenswrapper[5113]: I1212 14:12:05.483358 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:05 crc kubenswrapper[5113]: E1212 14:12:05.483418 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.048182 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:06 crc kubenswrapper[5113]: E1212 14:12:06.048370 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:06 crc kubenswrapper[5113]: E1212 14:12:06.048604 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs podName:357b225a-0c71-40ba-ac24-d769a9ff3f07 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.048582695 +0000 UTC m=+124.883832522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs") pod "network-metrics-daemon-jvcjp" (UID: "357b225a-0c71-40ba-ac24-d769a9ff3f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.119775 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerStarted","Data":"9237fb1c3f84937b9ed5ae24b5cd152e156c207d984f30022dbc059ffce50820"}
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.120941 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.120964 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.121010 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.331873 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.332418 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.374978 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podStartSLOduration=88.374962489 podStartE2EDuration="1m28.374962489s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:06.199401747 +0000 UTC m=+109.034651594" watchObservedRunningTime="2025-12-12 14:12:06.374962489 +0000 UTC m=+109.210212316"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.482000 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:06 crc kubenswrapper[5113]: E1212 14:12:06.482163 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:06 crc kubenswrapper[5113]: I1212 14:12:06.554415 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:06 crc kubenswrapper[5113]: E1212 14:12:06.554635 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.554594762 +0000 UTC m=+125.389844629 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:07 crc kubenswrapper[5113]: I1212 14:12:07.124457 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="e1aad3654956f6edc266f46b2e8afbe457c56b554633eb55e505bd64a87f79d7" exitCode=0
Dec 12 14:12:07 crc kubenswrapper[5113]: I1212 14:12:07.124516 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"e1aad3654956f6edc266f46b2e8afbe457c56b554633eb55e505bd64a87f79d7"}
Dec 12 14:12:07 crc kubenswrapper[5113]: I1212 14:12:07.483494 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:07 crc kubenswrapper[5113]: E1212 14:12:07.483618 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:07 crc kubenswrapper[5113]: I1212 14:12:07.483668 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:07 crc kubenswrapper[5113]: I1212 14:12:07.483717 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:07 crc kubenswrapper[5113]: E1212 14:12:07.483795 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:07 crc kubenswrapper[5113]: E1212 14:12:07.483872 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:08 crc kubenswrapper[5113]: I1212 14:12:08.156446 5113 generic.go:358] "Generic (PLEG): container finished" podID="043e6bda-e4ff-4fbf-8925-adf929d1af6f" containerID="06b071593d5c8540037d3afe0915adf7795a0e361e73d7d8fb2a62d072dda42b" exitCode=0
Dec 12 14:12:08 crc kubenswrapper[5113]: I1212 14:12:08.157804 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerDied","Data":"06b071593d5c8540037d3afe0915adf7795a0e361e73d7d8fb2a62d072dda42b"}
Dec 12 14:12:08 crc kubenswrapper[5113]: I1212 14:12:08.482092 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:08 crc kubenswrapper[5113]: E1212 14:12:08.482229 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.162865 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" event={"ID":"043e6bda-e4ff-4fbf-8925-adf929d1af6f","Type":"ContainerStarted","Data":"d2eaa29a148508f0e03f58d5f60c9f44825b5d09303d08bcd2cf394d92955458"}
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.224687 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xzdkb" podStartSLOduration=91.22464962 podStartE2EDuration="1m31.22464962s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:09.195159986 +0000 UTC m=+112.030409833" watchObservedRunningTime="2025-12-12 14:12:09.22464962 +0000 UTC m=+112.059899447"
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.225481 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jvcjp"]
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.225729 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:09 crc kubenswrapper[5113]: E1212 14:12:09.225880 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.483952 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:09 crc kubenswrapper[5113]: E1212 14:12:09.484055 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:09 crc kubenswrapper[5113]: I1212 14:12:09.484384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:09 crc kubenswrapper[5113]: E1212 14:12:09.484435 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:10 crc kubenswrapper[5113]: I1212 14:12:10.481915 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:10 crc kubenswrapper[5113]: I1212 14:12:10.481915 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:10 crc kubenswrapper[5113]: E1212 14:12:10.482047 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:10 crc kubenswrapper[5113]: E1212 14:12:10.482177 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:11 crc kubenswrapper[5113]: I1212 14:12:11.482425 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:11 crc kubenswrapper[5113]: I1212 14:12:11.482460 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:11 crc kubenswrapper[5113]: E1212 14:12:11.482593 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:11 crc kubenswrapper[5113]: E1212 14:12:11.482732 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:12 crc kubenswrapper[5113]: I1212 14:12:12.482582 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:12 crc kubenswrapper[5113]: E1212 14:12:12.482790 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:12 crc kubenswrapper[5113]: I1212 14:12:12.482841 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:12 crc kubenswrapper[5113]: E1212 14:12:12.482935 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:13 crc kubenswrapper[5113]: I1212 14:12:13.482139 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:13 crc kubenswrapper[5113]: E1212 14:12:13.482302 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 14:12:13 crc kubenswrapper[5113]: I1212 14:12:13.482384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:13 crc kubenswrapper[5113]: E1212 14:12:13.482598 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 14:12:14 crc kubenswrapper[5113]: I1212 14:12:14.481766 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:14 crc kubenswrapper[5113]: I1212 14:12:14.481846 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:14 crc kubenswrapper[5113]: E1212 14:12:14.482808 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 14:12:14 crc kubenswrapper[5113]: E1212 14:12:14.482882 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jvcjp" podUID="357b225a-0c71-40ba-ac24-d769a9ff3f07"
Dec 12 14:12:14 crc kubenswrapper[5113]: I1212 14:12:14.595887 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 12 14:12:14 crc kubenswrapper[5113]: I1212 14:12:14.596096 5113 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 12 14:12:14 crc kubenswrapper[5113]: I1212 14:12:14.630372 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"]
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.062998 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.067045 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.067057 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.067243 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.069428 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-btqmz"]
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.069590 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.069988 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.070479 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.070941 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.076436 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.076530 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.076981 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.078789 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.078956 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.080015 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.080371 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.081110 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.081436 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.084545 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.159286 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"]
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.159543 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.161767 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.161857 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.161775 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.162082 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.162538 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.162892 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.162947 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.163071 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.163296 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.163315 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.168470 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.197874 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-audit\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.197932 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.197954 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.197973 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-client\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.197992 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198195 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48786\" (UniqueName: \"kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198303 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198342 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-image-import-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198374 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198398 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5wz\" (UniqueName: \"kubernetes.io/projected/ebccfef3-840e-48d6-9da8-c61502a955fa-kube-api-access-nz5wz\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.198445 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-audit-dir\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199387 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-encryption-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199461 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-node-pullsecrets\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199518 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199578 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199638 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.199682 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-serving-cert\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.253892 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lh95k"]
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.254080 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.257412 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.257848 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.258725 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.259220 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.259269 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.259335 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.264184 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300254 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nz5wz\" (UniqueName: \"kubernetes.io/projected/ebccfef3-840e-48d6-9da8-c61502a955fa-kube-api-access-nz5wz\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300312 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-audit-dir\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300339 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-encryption-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300406 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-node-pullsecrets\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300457 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300480 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300482 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-audit-dir\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300505 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-serving-cert\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2009232f-996d-48ae-a590-9381aaea1db9-serving-cert\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300722 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-audit\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300764 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-config\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300825 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300850 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxsw9\" (UniqueName: \"kubernetes.io/projected/2009232f-996d-48ae-a590-9381aaea1db9-kube-api-access-fxsw9\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300874 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-client\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300899 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300940 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-48786\" (UniqueName: \"kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.300980 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.301010 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-image-import-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.301044 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.301415 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebccfef3-840e-48d6-9da8-c61502a955fa-node-pullsecrets\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.301858 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.302376 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.302394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.302614 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-audit\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.302827 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-image-import-ca\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.303038 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.303100 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebccfef3-840e-48d6-9da8-c61502a955fa-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.303564 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.303946 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.306842 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-etcd-client\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.307108 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-serving-cert\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.307218 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.307453 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebccfef3-840e-48d6-9da8-c61502a955fa-encryption-config\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.320604 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz5wz\" (UniqueName: \"kubernetes.io/projected/ebccfef3-840e-48d6-9da8-c61502a955fa-kube-api-access-nz5wz\") pod \"apiserver-9ddfb9f55-btqmz\" (UID: \"ebccfef3-840e-48d6-9da8-c61502a955fa\") " pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.320938 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-48786\" (UniqueName: \"kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786\") pod \"controller-manager-65b6cccf98-c6kfp\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"
Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.380038 5113 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.394190 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-xfscs"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.394448 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.396946 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.397524 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.397725 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.397774 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.398319 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.398834 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.402458 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-config\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.402507 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxsw9\" (UniqueName: \"kubernetes.io/projected/2009232f-996d-48ae-a590-9381aaea1db9-kube-api-access-fxsw9\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.402579 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.402617 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.403366 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2009232f-996d-48ae-a590-9381aaea1db9-serving-cert\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.403424 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-config\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.404285 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.404331 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009232f-996d-48ae-a590-9381aaea1db9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.410036 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2009232f-996d-48ae-a590-9381aaea1db9-serving-cert\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.419530 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxsw9\" (UniqueName: \"kubernetes.io/projected/2009232f-996d-48ae-a590-9381aaea1db9-kube-api-access-fxsw9\") pod \"authentication-operator-7f5c659b84-49qb7\" (UID: \"2009232f-996d-48ae-a590-9381aaea1db9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.485525 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.503930 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-config\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.503964 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.504060 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7cc\" (UniqueName: \"kubernetes.io/projected/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-kube-api-access-kb7cc\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.504239 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-images\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.571178 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.605428 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-config\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.605498 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.605524 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7cc\" (UniqueName: \"kubernetes.io/projected/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-kube-api-access-kb7cc\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.605559 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-images\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.606489 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-images\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.607166 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-config\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.610519 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.617323 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.617912 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.625674 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.626502 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.626712 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.626996 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.629828 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7cc\" (UniqueName: \"kubernetes.io/projected/4375bb7b-b15a-46f7-b092-fbd3c7a247fb-kube-api-access-kb7cc\") pod \"machine-api-operator-755bb95488-lh95k\" (UID: \"4375bb7b-b15a-46f7-b092-fbd3c7a247fb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.638925 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2t5sb"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.639160 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.643667 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.644356 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.648186 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.649608 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.649841 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.649996 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.661614 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.662717 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.671073 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.672882 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.673665 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.673947 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674150 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674461 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674522 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674601 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674653 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.674762 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.675226 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.678606 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.679110 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.686898 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.692690 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.696266 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.696600 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.703614 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.703850 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.708891 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.708931 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.708988 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709022 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/610b31b0-b3fb-4878-be26-48601b1cb9d0-available-featuregates\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709063 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2x2\" (UniqueName: \"kubernetes.io/projected/610b31b0-b3fb-4878-be26-48601b1cb9d0-kube-api-access-fl2x2\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709093 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp\") pod 
\"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709131 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709164 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709184 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709210 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709226 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709243 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.726740 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.709261 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730438 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78t6\" (UniqueName: \"kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730534 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8z7\" (UniqueName: \"kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730615 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730685 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730727 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730782 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730807 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca\") pod 
\"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730827 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.730890 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610b31b0-b3fb-4878-be26-48601b1cb9d0-serving-cert\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.753423 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-d99w4"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.753924 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.757503 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.759629 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.759831 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.761024 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.761105 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.761241 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.765875 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-zx9gm"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.768484 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.771881 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.783746 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.784062 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.784582 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.787265 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.787487 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.787580 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.788288 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.792208 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.792674 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.794234 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-f4974"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.794717 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.847152 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.847201 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.847388 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.847463 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.847676 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848824 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00752359-7fda-4a4a-bddd-47c8c0939d7f-machine-approver-tls\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848870 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/610b31b0-b3fb-4878-be26-48601b1cb9d0-available-featuregates\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848890 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rdx\" (UniqueName: \"kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848908 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fl2x2\" (UniqueName: \"kubernetes.io/projected/610b31b0-b3fb-4878-be26-48601b1cb9d0-kube-api-access-fl2x2\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848923 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-config\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848951 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848972 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.848994 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849010 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-auth-proxy-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849031 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849048 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849070 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849113 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849148 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849171 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q78t6\" (UniqueName: \"kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849192 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8z7\" (UniqueName: \"kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849209 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckts\" (UniqueName: \"kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849233 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849250 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849330 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849380 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6hjr\" (UniqueName: \"kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849410 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4479\" (UniqueName: \"kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479\") pod 
\"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849480 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849512 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849571 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849649 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610b31b0-b3fb-4878-be26-48601b1cb9d0-serving-cert\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849677 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-service-ca\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849709 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-oauth-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 
14:12:16.849797 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849830 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849854 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849903 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849964 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849989 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.850840 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851144 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851311 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851463 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851721 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.849989 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-oauth-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851860 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-trusted-ca-bundle\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.852195 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/610b31b0-b3fb-4878-be26-48601b1cb9d0-available-featuregates\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.852201 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.852443 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.852456 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.852579 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.851861 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.853398 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.856923 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.857192 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.857368 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.857802 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.857939 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.858110 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.859980 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.860282 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.861292 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.861763 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.861841 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.861915 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.863471 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866175 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866203 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866512 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866543 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866637 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866664 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866759 5113 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866805 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866859 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866938 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.866982 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.867258 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.867346 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.867830 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.867955 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.870240 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-wnqfw"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.871672 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.872389 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.873435 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.875477 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.876271 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.876750 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.878583 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.878774 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.879095 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.880018 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.880268 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.882300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.886406 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.888270 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.890797 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.890927 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.891072 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8z7\" (UniqueName: \"kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.894268 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl2x2\" (UniqueName: \"kubernetes.io/projected/610b31b0-b3fb-4878-be26-48601b1cb9d0-kube-api-access-fl2x2\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.896158 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.896551 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.896754 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.901791 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.901791 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78t6\" (UniqueName: \"kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6\") pod \"route-controller-manager-776cdc94d6-wh8pj\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.901855 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610b31b0-b3fb-4878-be26-48601b1cb9d0-serving-cert\") pod \"openshift-config-operator-5777786469-xfscs\" (UID: \"610b31b0-b3fb-4878-be26-48601b1cb9d0\") " pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.902534 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2t5sb\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.907669 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.909783 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.910024 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.913622 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cmps6"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.913936 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.935581 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.940226 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.958289 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.958537 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.965705 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.974858 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.976058 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.976826 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.976957 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087108f-8a95-41df-be4f-7f61b16f4b74-config\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977060 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-oauth-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977160 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-trusted-ca-bundle\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977232 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00752359-7fda-4a4a-bddd-47c8c0939d7f-machine-approver-tls\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977343 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75rdx\" (UniqueName: 
\"kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977443 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-config\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977548 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977649 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-auth-proxy-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977757 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977842 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.977925 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978021 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrb4\" (UniqueName: \"kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4\") pod \"downloads-747b44746d-wnqfw\" (UID: \"4723fa2f-a114-4d27-875f-951678d39dde\") " pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978110 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gckts\" (UniqueName: 
\"kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978220 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978293 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa533e2-d1a2-4931-9a85-2584a1d06c96-tmp-dir\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978367 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c087108f-8a95-41df-be4f-7f61b16f4b74-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978458 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6hjr\" (UniqueName: \"kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978592 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4479\" (UniqueName: \"kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978723 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttlg\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-kube-api-access-rttlg\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978851 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-service-ca\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978956 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eaa533e2-d1a2-4931-9a85-2584a1d06c96-config\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979059 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-oauth-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979176 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087108f-8a95-41df-be4f-7f61b16f4b74-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979270 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979387 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979480 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-oauth-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979498 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979664 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979774 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eaa533e2-d1a2-4931-9a85-2584a1d06c96-serving-cert\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.979913 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2mk\" (UniqueName: \"kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.980028 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.981608 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.982838 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-trusted-ca-bundle\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.983359 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-service-ca\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.978887 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-95skb"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.986207 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-serving-cert\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.987247 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.987387 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00752359-7fda-4a4a-bddd-47c8c0939d7f-auth-proxy-config\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.987622 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-config\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.987647 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.988186 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.990736 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00752359-7fda-4a4a-bddd-47c8c0939d7f-machine-approver-tls\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.991631 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-console-oauth-config\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.991796 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.992834 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.992980 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.994990 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.996710 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7"] Dec 12 14:12:16 crc kubenswrapper[5113]: I1212 14:12:16.997427 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.005514 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.005853 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.007851 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.011882 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.014023 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.014218 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.023911 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.028389 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.028494 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.030268 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.032623 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.032756 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.038022 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.038221 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-zqpbf"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.038600 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.038611 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.042790 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.043016 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.046850 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-xfscs"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.047293 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.047147 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.052210 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.054771 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.055137 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.055242 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.054909 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.065489 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.070501 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.071448 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.082374 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-btqmz"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.082426 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5cr2b"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.082558 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083027 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083063 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083100 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrb4\" (UniqueName: \"kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4\") pod \"downloads-747b44746d-wnqfw\" (UID: \"4723fa2f-a114-4d27-875f-951678d39dde\") " pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083331 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa533e2-d1a2-4931-9a85-2584a1d06c96-tmp-dir\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083354 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c087108f-8a95-41df-be4f-7f61b16f4b74-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083385 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttlg\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-kube-api-access-rttlg\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083426 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa533e2-d1a2-4931-9a85-2584a1d06c96-config\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083450 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087108f-8a95-41df-be4f-7f61b16f4b74-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083473 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083503 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083547 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa533e2-d1a2-4931-9a85-2584a1d06c96-serving-cert\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083569 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2mk\" (UniqueName: \"kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083593 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087108f-8a95-41df-be4f-7f61b16f4b74-config\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.083933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.084492 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" 
(UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.084822 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa533e2-d1a2-4931-9a85-2584a1d06c96-tmp-dir\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.085284 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.085379 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c087108f-8a95-41df-be4f-7f61b16f4b74-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.085980 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087108f-8a95-41df-be4f-7f61b16f4b74-config\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.086001 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.087512 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4nblp"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.088277 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.088562 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.093323 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa533e2-d1a2-4931-9a85-2584a1d06c96-config\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.094552 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-brp2v"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.094739 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.095717 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087108f-8a95-41df-be4f-7f61b16f4b74-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.101869 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa533e2-d1a2-4931-9a85-2584a1d06c96-serving-cert\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.102080 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.106532 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108308 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108351 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108367 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108383 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2t5sb"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108398 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108410 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108421 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108433 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-f4974"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108444 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108456 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108467 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lh95k"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108480 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108492 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108503 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108515 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108527 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108538 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108550 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-zqpbf"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108561 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-zx9gm"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108572 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108582 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108593 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-wnqfw"] Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108603 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7"] Dec 12 14:12:17 crc kubenswrapper[5113]: 
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108614 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cmps6"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108624 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.108633 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.109257 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rlhjs"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.109053 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-brp2v"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115016 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4nblp"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115075 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115107 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115138 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115153 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-brp2v"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115169 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdlcg"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.115275 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rlhjs"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.119411 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rlhjs"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.119447 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.119461 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdlcg"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.119474 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rlsj9"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.119676 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.123745 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.123905 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-btqmz"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.124039 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.125910 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.146713 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.163865 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lh95k"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.165232 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.166814 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 14:12:17 crc kubenswrapper[5113]: W1212 14:12:17.180826 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec404d0b_005f_4f01_80db_d36605948e5c.slice/crio-4e595f1dfbbffface8699c5529df1b36a5020e03a41978f352fbc40d14865898 WatchSource:0}: Error finding container 4e595f1dfbbffface8699c5529df1b36a5020e03a41978f352fbc40d14865898: Status 404 returned error can't find the container with id 4e595f1dfbbffface8699c5529df1b36a5020e03a41978f352fbc40d14865898
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.192936 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.193278 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" event={"ID":"ec404d0b-005f-4f01-80db-d36605948e5c","Type":"ContainerStarted","Data":"4e595f1dfbbffface8699c5529df1b36a5020e03a41978f352fbc40d14865898"}
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.202090 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" event={"ID":"2009232f-996d-48ae-a590-9381aaea1db9","Type":"ContainerStarted","Data":"224636bdd9d069dd7f00e0bb90a22fec95acc116f55d4d4253882fcdfc56797b"}
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.207754 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.208317 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" event={"ID":"ebccfef3-840e-48d6-9da8-c61502a955fa","Type":"ContainerStarted","Data":"eb598e798fd0f3e2b0f1126d96eaa855041b2601b372e8269481311d6b9845cf"}
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.226525 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.246934 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.247710 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-xfscs"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.266288 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.278397 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"]
Dec 12 14:12:17 crc kubenswrapper[5113]: W1212 14:12:17.280051 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod610b31b0_b3fb_4878_be26_48601b1cb9d0.slice/crio-acee7daa66044ce553274ec4ff58e675e4bc6e6e29b5e07e7e1a5e750e06cee5 WatchSource:0}: Error finding container acee7daa66044ce553274ec4ff58e675e4bc6e6e29b5e07e7e1a5e750e06cee5: Status 404 returned error can't find the container with id acee7daa66044ce553274ec4ff58e675e4bc6e6e29b5e07e7e1a5e750e06cee5
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.285028 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.305056 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.313264 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2t5sb"]
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.349469 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.368185 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.386969 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.405770 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.426467 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.476981 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.478045 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.484997 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.492850 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.505777 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.545186 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.607134 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.607288 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.607340 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.627925 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.729416 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.746208 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.765367 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.788822 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807539 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807874 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807924 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807945 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807981 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.807999 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qb8t\" (UniqueName: \"kubernetes.io/projected/d4df8836-f797-48b9-905f-5790efb2e6af-kube-api-access-6qb8t\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808061 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808082 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808103 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808204 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808222 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config\") 
pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808361 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808402 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-dir\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808443 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808470 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808534 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808561 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lvp\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808586 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8dl\" (UniqueName: \"kubernetes.io/projected/2a79044c-9f1f-4d59-8e63-e138868ebdd2-kube-api-access-wl8dl\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.808610 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: E1212 14:12:17.809313 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.309296802 +0000 UTC m=+121.144546629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.833461 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.845405 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.866704 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.885933 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.907928 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910152 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:17 crc kubenswrapper[5113]: E1212 14:12:17.910356 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.410321236 +0000 UTC m=+121.245571063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910425 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjxh\" (UniqueName: \"kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910459 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910481 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910552 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-images\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910635 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmscr\" (UniqueName: \"kubernetes.io/projected/66b65774-248e-407d-9e9d-c9f770175654-kube-api-access-qmscr\") pod \"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910654 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-tmp-dir\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910703 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-srv-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910836 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zcw5\" (UniqueName: \"kubernetes.io/projected/7cedf898-adef-4e24-9d76-b6a19006883c-kube-api-access-2zcw5\") pod \"migrator-866fcbc849-n69j2\" (UID: \"7cedf898-adef-4e24-9d76-b6a19006883c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910962 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.910998 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zcr\" (UniqueName: \"kubernetes.io/projected/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-kube-api-access-r2zcr\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911021 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911039 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899rn\" (UniqueName: \"kubernetes.io/projected/de734fb8-e662-4172-b64e-57bb9b51c606-kube-api-access-899rn\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911055 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911074 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-srv-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911097 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911129 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctd7j\" (UniqueName: \"kubernetes.io/projected/6dc95aca-746a-4087-9143-d5e3591eb687-kube-api-access-ctd7j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911162 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-node-bootstrap-token\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-webhook-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911211 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911313 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjdv\" (UniqueName: \"kubernetes.io/projected/33e71d64-89ca-44fa-9941-337b91d25c4f-kube-api-access-7zjdv\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911400 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlwnq\" (UniqueName: \"kubernetes.io/projected/5d332a03-7aae-43d2-b644-bdffb3c9b992-kube-api-access-mlwnq\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:17 crc kubenswrapper[5113]: E1212 14:12:17.911463 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.411451603 +0000 UTC m=+121.246701420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911495 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lvp\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911518 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/467a9df6-b909-453f-8a3a-8fec3fd4b54f-tmpfs\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911556 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911573 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-socket-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911594 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-stats-auth\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911621 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v495z\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-kube-api-access-v495z\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911677 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzh66\" (UniqueName: \"kubernetes.io/projected/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-kube-api-access-fzh66\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911728 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911797 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-profile-collector-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911818 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de734fb8-e662-4172-b64e-57bb9b51c606-config\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911839 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911857 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-plugins-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911876 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5ws\" (UniqueName: \"kubernetes.io/projected/3d4e952f-6916-451e-aa61-2975f38fa7f4-kube-api-access-kh5ws\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911900 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1db49e24-69a6-47c8-b689-df0b1754efac-metrics-tls\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911969 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.911993 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912045 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-key\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912062 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-tmpfs\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912190 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-mountpoint-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912291 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-csi-data-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912338 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912529 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912574 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/66b65774-248e-407d-9e9d-c9f770175654-webhook-certs\") pod \"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912599 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912615 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6f605d-f530-448b-825d-cd7dedd4c632-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912637 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912654 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6f605d-f530-448b-825d-cd7dedd4c632-config\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912700 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d4e952f-6916-451e-aa61-2975f38fa7f4-tmpfs\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc95aca-746a-4087-9143-d5e3591eb687-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912809 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.912931 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-metrics-tls\") pod 
\"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913200 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913269 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913519 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z657h\" (UniqueName: \"kubernetes.io/projected/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-kube-api-access-z657h\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913592 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913620 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.913867 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914037 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zll2b\" (UniqueName: \"kubernetes.io/projected/1db49e24-69a6-47c8-b689-df0b1754efac-kube-api-access-zll2b\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914089 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 
14:12:17.914131 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6pz\" (UniqueName: \"kubernetes.io/projected/76704edc-840b-442f-8f26-4a5c394e5e4f-kube-api-access-hn6pz\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914155 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-registration-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914206 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s78v\" (UniqueName: \"kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914323 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-apiservice-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914362 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914429 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbqn\" (UniqueName: \"kubernetes.io/projected/467a9df6-b909-453f-8a3a-8fec3fd4b54f-kube-api-access-hpbqn\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914581 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 
14:12:17.914653 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-dir\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914727 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914743 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-dir\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914755 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6f605d-f530-448b-825d-cd7dedd4c632-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914804 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914841 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914892 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-metrics-certs\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914914 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rmj\" (UniqueName: \"kubernetes.io/projected/ac816b90-036c-4024-a177-f6e32b250393-kube-api-access-w4rmj\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"
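Note: the repeating pattern above is the kubelet volume manager reconciler. For each volume a scheduled pod needs it logs VerifyControllerAttachedVolume (reconciler_common.go:251), then MountVolume started (reconciler_common.go:224) when the operation executor picks the mount up, then MountVolume.SetUp succeeded (operation_generator.go:615) once setup finishes. The following self-contained Go sketch shows that desired-state/actual-state loop in miniature; the volume type, the two hard-coded volumes, and the synchronous flow are illustrative stand-ins, not kubelet's real volumemanager types (which run the operations asynchronously).

package main

import "fmt"

// Illustrative stand-in for an entry in the desired state of world.
type volume struct{ name, pod string }

func main() {
	desired := []volume{
		{"audit-dir", "openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"},
		{"trusted-ca", "openshift-console-operator/console-operator-67c89758df-f4974"},
	}
	actual := map[string]bool{} // volumes already mounted (actual state of world)

	// One reconciler pass: anything desired but not yet actual is
	// verified as attached and then mounted, as in the log lines above.
	for _, v := range desired {
		if actual[v.name] {
			continue // already mounted; nothing to do this pass
		}
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod=%q\n", v.name, v.pod)
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod=%q\n", v.name, v.pod)
		actual[v.name] = true // stands in for the async operation completing
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v.name, v.pod)
	}
}

Running repeated passes over the same desired state is idempotent, which is why the log can safely show the same volumes re-listed in later reconciler iterations.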
(UniqueName: \"kubernetes.io/configmap/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-config-volume\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914953 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8dl\" (UniqueName: \"kubernetes.io/projected/2a79044c-9f1f-4d59-8e63-e138868ebdd2-kube-api-access-wl8dl\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.914973 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-service-ca-bundle\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915021 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915058 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cp6\" (UniqueName: \"kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915091 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915093 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915175 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrt5\" (UniqueName: \"kubernetes.io/projected/41890204-5a81-44d5-99eb-82be690cc03d-kube-api-access-ncrt5\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915194 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee6f605d-f530-448b-825d-cd7dedd4c632-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: 
\"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915216 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac816b90-036c-4024-a177-f6e32b250393-tmp-dir\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915235 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915254 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-cert\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915329 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-cabundle\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915410 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915446 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-certs\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915476 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrj8w\" (UniqueName: \"kubernetes.io/projected/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-kube-api-access-mrj8w\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915501 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de734fb8-e662-4172-b64e-57bb9b51c606-serving-cert\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915521 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvscm\" (UniqueName: \"kubernetes.io/projected/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-kube-api-access-dvscm\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915543 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-default-certificate\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915569 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915589 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915632 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915698 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915723 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915742 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1db49e24-69a6-47c8-b689-df0b1754efac-tmp-dir\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915758 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41890204-5a81-44d5-99eb-82be690cc03d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915775 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915796 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915819 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qb8t\" (UniqueName: \"kubernetes.io/projected/d4df8836-f797-48b9-905f-5790efb2e6af-kube-api-access-6qb8t\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915835 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915866 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4slng\" (UniqueName: \"kubernetes.io/projected/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-kube-api-access-4slng\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915884 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.915902 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmhgv\" (UniqueName: \"kubernetes.io/projected/b75fe240-84aa-4d9f-be64-7f2727566095-kube-api-access-zmhgv\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.926076 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.948575 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.967007 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:17 crc kubenswrapper[5113]: I1212 14:12:17.987690 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.006513 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.018498 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.018750 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.518717931 +0000 UTC m=+121.353967758 (durationBeforeRetry 500ms). 
Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.018750 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.518717931 +0000 UTC m=+121.353967758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.018889 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.018937 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn6pz\" (UniqueName: \"kubernetes.io/projected/76704edc-840b-442f-8f26-4a5c394e5e4f-kube-api-access-hn6pz\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.018966 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-registration-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019014 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2s78v\" (UniqueName: \"kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019045 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-apiservice-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbqn\" (UniqueName: \"kubernetes.io/projected/467a9df6-b909-453f-8a3a-8fec3fd4b54f-kube-api-access-hpbqn\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019107 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x"
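Note: the TearDown failure above is mechanical. Volumes identified as kubernetes.io/csi/&lt;driver&gt;^&lt;volume&gt; are resolved through kubelet's registry of CSI drivers that have announced themselves over the plugin registration socket, and kubevirt.io.hostpath-provisioner has not registered yet: the csi-hostpathplugin-qdlcg pod whose own volumes are being mounted here is the pod that provides it. Until it registers, every mount or unmount touching that driver fails fast with the same "not found in the list of registered CSI drivers" message. A toy registry showing the lookup, with illustrative names rather than kubelet's real csi_plugin types:

package main

import (
	"fmt"
	"sync"
)

// Toy stand-in for kubelet's set of CSI drivers that have
// registered over the plugin socket.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *csiRegistry) lookup(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		// Same shape as the TearDownAt/MountDevice errors in the log.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]struct{}{}}

	// Before csi-hostpathplugin starts: lookups fail fast.
	fmt.Println(reg.lookup("kubevirt.io.hostpath-provisioner"))

	// After the plugin pod comes up and registers over the socket:
	reg.mu.Lock()
	reg.drivers["kubevirt.io.hostpath-provisioner"] = struct{}{}
	reg.mu.Unlock()
	fmt.Println(reg.lookup("kubevirt.io.hostpath-provisioner")) // <nil>
}

Once registration happens, the pending operations simply succeed on a later retry, which is how the apparent chicken-and-egg between the image-registry PVC and the driver that serves it resolves itself.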
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019318 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6f605d-f530-448b-825d-cd7dedd4c632-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019413 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-registration-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019419 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019542 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-metrics-certs\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019600 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rmj\" (UniqueName: \"kubernetes.io/projected/ac816b90-036c-4024-a177-f6e32b250393-kube-api-access-w4rmj\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019620 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-config-volume\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019645 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-service-ca-bundle\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019876 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: 
\"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019927 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cp6\" (UniqueName: \"kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.019975 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrt5\" (UniqueName: \"kubernetes.io/projected/41890204-5a81-44d5-99eb-82be690cc03d-kube-api-access-ncrt5\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020005 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee6f605d-f530-448b-825d-cd7dedd4c632-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020033 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac816b90-036c-4024-a177-f6e32b250393-tmp-dir\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020059 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020141 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-cert\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020221 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-cabundle\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020272 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-certs\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020306 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mrj8w\" (UniqueName: \"kubernetes.io/projected/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-kube-api-access-mrj8w\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020336 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de734fb8-e662-4172-b64e-57bb9b51c606-serving-cert\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020368 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dvscm\" (UniqueName: \"kubernetes.io/projected/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-kube-api-access-dvscm\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020396 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-default-certificate\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020422 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020477 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020505 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020543 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac816b90-036c-4024-a177-f6e32b250393-tmp-dir\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: 
\"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020588 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020656 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1db49e24-69a6-47c8-b689-df0b1754efac-tmp-dir\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020686 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-service-ca-bundle\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020695 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41890204-5a81-44d5-99eb-82be690cc03d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020737 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020773 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020803 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020829 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020854 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4slng\" (UniqueName: \"kubernetes.io/projected/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-kube-api-access-4slng\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020909 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmhgv\" (UniqueName: \"kubernetes.io/projected/b75fe240-84aa-4d9f-be64-7f2727566095-kube-api-access-zmhgv\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020932 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee6f605d-f530-448b-825d-cd7dedd4c632-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.020936 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjxh\" (UniqueName: \"kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021015 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1db49e24-69a6-47c8-b689-df0b1754efac-tmp-dir\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021059 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-images\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021096 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmscr\" (UniqueName: \"kubernetes.io/projected/66b65774-248e-407d-9e9d-c9f770175654-kube-api-access-qmscr\") pod 
\"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021133 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-tmp-dir\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021168 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-srv-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021226 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021393 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021396 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zcw5\" (UniqueName: \"kubernetes.io/projected/7cedf898-adef-4e24-9d76-b6a19006883c-kube-api-access-2zcw5\") pod \"migrator-866fcbc849-n69j2\" (UID: \"7cedf898-adef-4e24-9d76-b6a19006883c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021458 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zcr\" (UniqueName: \"kubernetes.io/projected/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-kube-api-access-r2zcr\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021495 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021521 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-tmp-dir\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021536 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-899rn\" 
(UniqueName: \"kubernetes.io/projected/de734fb8-e662-4172-b64e-57bb9b51c606-kube-api-access-899rn\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021564 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021583 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-srv-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021603 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021621 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctd7j\" (UniqueName: \"kubernetes.io/projected/6dc95aca-746a-4087-9143-d5e3591eb687-kube-api-access-ctd7j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021651 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-node-bootstrap-token\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021673 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-webhook-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021704 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021724 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjdv\" (UniqueName: 
\"kubernetes.io/projected/33e71d64-89ca-44fa-9941-337b91d25c4f-kube-api-access-7zjdv\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlwnq\" (UniqueName: \"kubernetes.io/projected/5d332a03-7aae-43d2-b644-bdffb3c9b992-kube-api-access-mlwnq\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021777 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/467a9df6-b909-453f-8a3a-8fec3fd4b54f-tmpfs\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021797 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021814 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021832 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-socket-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021848 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-stats-auth\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021867 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v495z\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-kube-api-access-v495z\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021870 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-images\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 
14:12:18.021921 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzh66\" (UniqueName: \"kubernetes.io/projected/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-kube-api-access-fzh66\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021977 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-profile-collector-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021993 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.021998 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de734fb8-e662-4172-b64e-57bb9b51c606-config\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022050 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022075 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-plugins-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022103 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5ws\" (UniqueName: \"kubernetes.io/projected/3d4e952f-6916-451e-aa61-2975f38fa7f4-kube-api-access-kh5ws\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022156 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1db49e24-69a6-47c8-b689-df0b1754efac-metrics-tls\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022184 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.022406 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.522392391 +0000 UTC m=+121.357642218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022464 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-plugins-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022960 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.022993 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-socket-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.023550 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/467a9df6-b909-453f-8a3a-8fec3fd4b54f-tmpfs\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.027075 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/41890204-5a81-44d5-99eb-82be690cc03d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.027319 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-key\") pod 
\"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.027367 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-tmpfs\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028590 5113 request.go:752] "Waited before sending request" delay="1.013607468s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028822 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-mountpoint-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028865 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-csi-data-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028930 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028953 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028982 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/66b65774-248e-407d-9e9d-c9f770175654-webhook-certs\") pod \"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.028999 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-mountpoint-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029010 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029093 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6f605d-f530-448b-825d-cd7dedd4c632-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029155 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b75fe240-84aa-4d9f-be64-7f2727566095-csi-data-dir\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029199 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6f605d-f530-448b-825d-cd7dedd4c632-config\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029281 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d4e952f-6916-451e-aa61-2975f38fa7f4-tmpfs\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029318 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc95aca-746a-4087-9143-d5e3591eb687-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029281 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-tmpfs\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029401 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" 
(UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-metrics-tls\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029508 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z657h\" (UniqueName: \"kubernetes.io/projected/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-kube-api-access-z657h\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029553 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zll2b\" (UniqueName: \"kubernetes.io/projected/1db49e24-69a6-47c8-b689-df0b1754efac-kube-api-access-zll2b\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029703 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d4e952f-6916-451e-aa61-2975f38fa7f4-tmpfs\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029771 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41890204-5a81-44d5-99eb-82be690cc03d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.029828 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.030076 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-profile-collector-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.030140 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1db49e24-69a6-47c8-b689-df0b1754efac-metrics-tls\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.030527 5113 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.031856 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.033057 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.035077 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-metrics-certs\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.036330 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc95aca-746a-4087-9143-d5e3591eb687-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.036842 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-stats-auth\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.038763 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.045840 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-default-certificate\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.047365 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.048296 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: 
\"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.065712 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.086436 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.093947 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6f605d-f530-448b-825d-cd7dedd4c632-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.105954 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.110527 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6f605d-f530-448b-825d-cd7dedd4c632-config\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.126156 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.130602 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.130892 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.630861158 +0000 UTC m=+121.466110985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.131616 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.132050 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.632027626 +0000 UTC m=+121.467277463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.146994 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.152930 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.167665 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.181075 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d4e952f-6916-451e-aa61-2975f38fa7f4-srv-cert\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.185975 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.206175 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.214640 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" event={"ID":"ec404d0b-005f-4f01-80db-d36605948e5c","Type":"ContainerStarted","Data":"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.214907 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.216676 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" event={"ID":"c55aed1a-bd22-4591-9394-247b0dbca87d","Type":"ContainerStarted","Data":"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.216711 5113 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-c6kfp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.216715 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" event={"ID":"c55aed1a-bd22-4591-9394-247b0dbca87d","Type":"ContainerStarted","Data":"92bd33b4313f287cc063de0fbe6c6c812e4a115dc3df17dd810b55d1dfd99351"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.216758 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.216904 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.218192 5113 generic.go:358] "Generic (PLEG): container finished" podID="610b31b0-b3fb-4878-be26-48601b1cb9d0" containerID="7384f0bbf5d7703d39085a0f7e056ec78ad5f6f50bba4cd531b862d4f48d6fc6" exitCode=0 Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.218279 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" event={"ID":"610b31b0-b3fb-4878-be26-48601b1cb9d0","Type":"ContainerDied","Data":"7384f0bbf5d7703d39085a0f7e056ec78ad5f6f50bba4cd531b862d4f48d6fc6"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.218304 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" event={"ID":"610b31b0-b3fb-4878-be26-48601b1cb9d0","Type":"ContainerStarted","Data":"acee7daa66044ce553274ec4ff58e675e4bc6e6e29b5e07e7e1a5e750e06cee5"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.218579 5113 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-2t5sb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.218617 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" 
podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.220015 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" event={"ID":"f07154ae-c1f9-42af-9327-e211d7199c82","Type":"ContainerStarted","Data":"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.220040 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" event={"ID":"f07154ae-c1f9-42af-9327-e211d7199c82","Type":"ContainerStarted","Data":"b7e2b9f82bef06388beef1d226a2b4c6cb3aa9b23bebadc36b56e6f5c7311f2e"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.220344 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.221931 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-wh8pj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.221971 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.223099 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.225431 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.226626 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1b9ea27857a99679db2fd57e2d2ab8a18cb0ceb4dbb017a67fa058f8afd2f605"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.229524 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" event={"ID":"2009232f-996d-48ae-a590-9381aaea1db9","Type":"ContainerStarted","Data":"819d87705259616c4f51cfe2019376fc6cb9b27c56dccc1f7d500cbbab5b4d30"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.231168 5113 generic.go:358] "Generic (PLEG): container finished" podID="ebccfef3-840e-48d6-9da8-c61502a955fa" containerID="257640ed36d56b35021e7a442e93e1391b69ca21280525292bd8fab193d5053a" exitCode=0 Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.231258 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" 
event={"ID":"ebccfef3-840e-48d6-9da8-c61502a955fa","Type":"ContainerDied","Data":"257640ed36d56b35021e7a442e93e1391b69ca21280525292bd8fab193d5053a"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.232486 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.232683 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.732663597 +0000 UTC m=+121.567913424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.233027 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.233680 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.73365916 +0000 UTC m=+121.568909057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.234820 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" event={"ID":"4375bb7b-b15a-46f7-b092-fbd3c7a247fb","Type":"ContainerStarted","Data":"68268ab3a82202b35cbbb9b456cf0c691cfcc2975d9e4765df974339cab4cdbc"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.234873 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" event={"ID":"4375bb7b-b15a-46f7-b092-fbd3c7a247fb","Type":"ContainerStarted","Data":"1e84a058ffc89a7ef5428c4c4c4059e7d9dc101cc633e8e422895da657de6339"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.234883 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" event={"ID":"4375bb7b-b15a-46f7-b092-fbd3c7a247fb","Type":"ContainerStarted","Data":"7e728ff663cd9ed1f7c7ee041f144d2b7103064d160856bda36241b1eb7ac108"} Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.253428 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.265706 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.265761 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de734fb8-e662-4172-b64e-57bb9b51c606-serving-cert\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.272986 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de734fb8-e662-4172-b64e-57bb9b51c606-config\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.286613 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.306160 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.312526 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-cabundle\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.326561 5113 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.334191 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.334326 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.834298161 +0000 UTC m=+121.669547988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.334615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.337562 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.837547418 +0000 UTC m=+121.672797245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.345270 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.366016 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.383834 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.435942 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.436233 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.936213774 +0000 UTC m=+121.771463591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.436769 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.438032 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:18.938020993 +0000 UTC m=+121.773270820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.449633 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d332a03-7aae-43d2-b644-bdffb3c9b992-signing-key\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.498221 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.499926 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.500254 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.501054 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.501201 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.505153 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.516029 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.531103 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467a9df6-b909-453f-8a3a-8fec3fd4b54f-srv-cert\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.531544 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.542715 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-webhook-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: 
I1212 14:12:18.543320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.544054 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.04403774 +0000 UTC m=+121.879287567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.544104 5113 projected.go:289] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.551930 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-apiservice-cert\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.556686 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.567841 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.575010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.588567 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.600251 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rttlg\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-kube-api-access-rttlg\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.645104 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d3eb42d-abaa-44da-9509-6c37f51f2cd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9656x\" (UID: \"9d3eb42d-abaa-44da-9509-6c37f51f2cd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.646855 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.648235 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.148218117 +0000 UTC m=+121.983467944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.666532 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.675070 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-certs\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.685057 5113 projected.go:289] Couldn't get configMap openshift-console/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.685089 5113 projected.go:289] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.686759 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.699948 5113 projected.go:289] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.707636 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.718894 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33e71d64-89ca-44fa-9941-337b91d25c4f-node-bootstrap-token\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.726979 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.748565 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.751410 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.751759 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.251725242 +0000 UTC m=+122.086975069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.752157 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.752489 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.252481726 +0000 UTC m=+122.087731553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.755829 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/66b65774-248e-407d-9e9d-c9f770175654-webhook-certs\") pod \"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.766799 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.785950 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.806264 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.819349 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-cert\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.826048 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.847547 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.853663 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.853866 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.353829141 +0000 UTC m=+122.189078968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.854476 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-config-volume\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.854595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.855108 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.355100293 +0000 UTC m=+122.190350120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.865444 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.886061 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.911044 5113 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.911179 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.411152056 +0000 UTC m=+122.246401883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.912077 5113 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.912187 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.412172529 +0000 UTC m=+122.247422356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.912220 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.912489 5113 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.912551 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config podName:d4df8836-f797-48b9-905f-5790efb2e6af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.412531251 +0000 UTC m=+122.247781078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config") pod "console-operator-67c89758df-f4974" (UID: "d4df8836-f797-48b9-905f-5790efb2e6af") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.913331 5113 secret.go:189] Couldn't get secret openshift-image-registry/installation-pull-secrets: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.913405 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets podName:87300cd0-fd46-44e7-9925-c8cf3322b686 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.413392299 +0000 UTC m=+122.248642196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.913825 5113 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.913863 5113 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-66587d64c8-cd7rw: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.913904 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls podName:87300cd0-fd46-44e7-9925-c8cf3322b686 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.413892796 +0000 UTC m=+122.249142623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.914673 5113 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.914732 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.414721132 +0000 UTC m=+122.249971039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915409 5113 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915442 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.415432586 +0000 UTC m=+122.250682483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915462 5113 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915485 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.415478457 +0000 UTC m=+122.250728374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915507 5113 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.915536 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca podName:d4df8836-f797-48b9-905f-5790efb2e6af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.415529089 +0000 UTC m=+122.250779006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca") pod "console-operator-67c89758df-f4974" (UID: "d4df8836-f797-48b9-905f-5790efb2e6af") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.916671 5113 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.916817 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies podName:2a79044c-9f1f-4d59-8e63-e138868ebdd2 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.416799651 +0000 UTC m=+122.252049478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies") pod "apiserver-8596bd845d-jqdlj" (UID: "2a79044c-9f1f-4d59-8e63-e138868ebdd2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.916675 5113 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.916993 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert podName:d4df8836-f797-48b9-905f-5790efb2e6af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.416983667 +0000 UTC m=+122.252233494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert") pod "console-operator-67c89758df-f4974" (UID: "d4df8836-f797-48b9-905f-5790efb2e6af") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.917563 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-metrics-tls\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.926277 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.945424 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.955699 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.955871 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.455850818 +0000 UTC m=+122.291100665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.956389 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:18 crc kubenswrapper[5113]: E1212 14:12:18.956733 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.456722146 +0000 UTC m=+122.291971973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.964644 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 12 14:12:18 crc kubenswrapper[5113]: I1212 14:12:18.973980 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.005579 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.019151 5113 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.019481 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert podName:ac816b90-036c-4024-a177-f6e32b250393 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.519456558 +0000 UTC m=+122.354706405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert") pod "etcd-operator-69b85846b6-sntfz" (UID: "ac816b90-036c-4024-a177-f6e32b250393") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020333 5113 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020462 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca podName:ac816b90-036c-4024-a177-f6e32b250393 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.52043444 +0000 UTC m=+122.355684267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca") pod "etcd-operator-69b85846b6-sntfz" (UID: "ac816b90-036c-4024-a177-f6e32b250393") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020646 5113 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020788 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config podName:ac816b90-036c-4024-a177-f6e32b250393 nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:19.520775861 +0000 UTC m=+122.356025708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config") pod "etcd-operator-69b85846b6-sntfz" (UID: "ac816b90-036c-4024-a177-f6e32b250393") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020884 5113 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020937 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client podName:ac816b90-036c-4024-a177-f6e32b250393 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.520925616 +0000 UTC m=+122.356175453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client") pod "etcd-operator-69b85846b6-sntfz" (UID: "ac816b90-036c-4024-a177-f6e32b250393") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.020892 5113 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.021175 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config podName:76704edc-840b-442f-8f26-4a5c394e5e4f nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.521163203 +0000 UTC m=+122.356413040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config") pod "kube-storage-version-migrator-operator-565b79b866-q57sn" (UID: "76704edc-840b-442f-8f26-4a5c394e5e4f") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.021299 5113 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.021430 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca podName:ac816b90-036c-4024-a177-f6e32b250393 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.521417301 +0000 UTC m=+122.356667138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca") pod "etcd-operator-69b85846b6-sntfz" (UID: "ac816b90-036c-4024-a177-f6e32b250393") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.022721 5113 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.022891 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert podName:76704edc-840b-442f-8f26-4a5c394e5e4f nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.522868439 +0000 UTC m=+122.358118266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-q57sn" (UID: "76704edc-840b-442f-8f26-4a5c394e5e4f") : failed to sync secret cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.026527 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.044138 5113 request.go:752] "Waited before sending request" delay="1.367162199s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=36837" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.045838 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.057569 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.058443 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.558425152 +0000 UTC m=+122.393674979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.065572 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.085673 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.126424 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.126481 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lvp\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.146337 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.194827 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.195312 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.695294028 +0000 UTC m=+122.530543855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.195326 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.233545 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.296042 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.296682 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.796665813 +0000 UTC m=+122.631915640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.315478 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.315599 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.315833 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.316273 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.325595 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.325949 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.340759 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" event={"ID":"610b31b0-b3fb-4878-be26-48601b1cb9d0","Type":"ContainerStarted","Data":"e431d70de07eb6761781f13e0e7464faacd96bcd5a77a06157e768b945ad153c"} Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.341643 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.347631 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.350984 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" event={"ID":"ebccfef3-840e-48d6-9da8-c61502a955fa","Type":"ContainerStarted","Data":"89c9bca57ee38b397741f86d6146fa16e9097d843c164bf97a8a9a29e6735c20"} Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.356714 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.360153 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.392512 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.393263 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 
14:12:19.398541 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.399110 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:19.899092462 +0000 UTC m=+122.734342289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.449285 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s78v\" (UniqueName: \"kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v\") pod \"marketplace-operator-547dbd544d-2qnd9\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.482357 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbqn\" (UniqueName: \"kubernetes.io/projected/467a9df6-b909-453f-8a3a-8fec3fd4b54f-kube-api-access-hpbqn\") pod \"catalog-operator-75ff9f647d-qnk64\" (UID: \"467a9df6-b909-453f-8a3a-8fec3fd4b54f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.482978 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d70d7f9-c575-48ab-ad97-52b038fbe2c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-gnb6x\" (UID: \"3d70d7f9-c575-48ab-ad97-52b038fbe2c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.500480 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.500676 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.000635473 +0000 UTC m=+122.835885310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.506843 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.506962 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507114 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507267 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507317 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507341 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507491 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507629 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.507764 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.508043 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.508179 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.508375 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.509480 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.009462952 +0000 UTC m=+122.844712989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.510403 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6f605d-f530-448b-825d-cd7dedd4c632-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v8bxf\" (UID: \"ee6f605d-f530-448b-825d-cd7dedd4c632\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.512950 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4df8836-f797-48b9-905f-5790efb2e6af-serving-cert\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.513263 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-serving-ca\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.513779 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-audit-policies\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.513866 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.514522 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a79044c-9f1f-4d59-8e63-e138868ebdd2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.515592 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-config\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.526644 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.527454 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-etcd-client\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.530503 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-encryption-config\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.532501 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4df8836-f797-48b9-905f-5790efb2e6af-trusted-ca\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.533880 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a79044c-9f1f-4d59-8e63-e138868ebdd2-serving-cert\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.535409 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.547539 5113 projected.go:289] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.547586 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gckts for pod openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.547695 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts podName:3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.047670481 +0000 UTC m=+122.882920308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gckts" (UniqueName: "kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts") pod "openshift-apiserver-operator-846cbfc458-w4zp6" (UID: "3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.549849 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.587834 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.589199 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cp6\" (UniqueName: \"kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6\") pod \"collect-profiles-29425800-ks4pt\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.603669 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrt5\" (UniqueName: \"kubernetes.io/projected/41890204-5a81-44d5-99eb-82be690cc03d-kube-api-access-ncrt5\") pod \"machine-config-operator-67c9d58cbb-jtjg7\" (UID: \"41890204-5a81-44d5-99eb-82be690cc03d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609472 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609622 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609670 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609688 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609707 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config\") pod 
\"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609738 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609774 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.609893 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.610167 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.110110924 +0000 UTC m=+122.945360751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.610731 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.612560 5113 projected.go:289] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.612587 5113 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.612645 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access podName:eaa533e2-d1a2-4931-9a85-2584a1d06c96 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.112628386 +0000 UTC m=+122.947878213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access") pod "kube-apiserver-operator-575994946d-79b8w" (UID: "eaa533e2-d1a2-4931-9a85-2584a1d06c96") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.612868 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.613739 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-serving-cert\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.616496 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.628181 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.629497 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.631617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-config\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.633326 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrj8w\" (UniqueName: \"kubernetes.io/projected/a5d19b96-2e13-4b63-9055-7f8c8eb785fb-kube-api-access-mrj8w\") pod \"packageserver-7d4fc7d867-v88kj\" (UID: \"a5d19b96-2e13-4b63-9055-7f8c8eb785fb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.647546 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.654955 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.655974 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac816b90-036c-4024-a177-f6e32b250393-etcd-client\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.664314 5113 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.667431 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.672847 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76704edc-840b-442f-8f26-4a5c394e5e4f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686181 5113 projected.go:289] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686194 5113 projected.go:289] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686224 5113 projected.go:194] Error preparing data for projected volume kube-api-access-m4479 for pod openshift-console/console-64d44f6ddf-zx9gm: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686238 5113 projected.go:194] Error preparing data for projected volume kube-api-access-m6hjr for pod openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686314 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr podName:0bac4af3-97cb-49b6-bd5a-c238ce8aefe0 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.186292565 +0000 UTC m=+123.021542392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m6hjr" (UniqueName: "kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr") pod "cluster-samples-operator-6b564684c8-vc9l4" (UID: "0bac4af3-97cb-49b6-bd5a-c238ce8aefe0") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.686330 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479 podName:edd2186b-b29e-49dd-8b4f-ed2081fac2d4 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.186322906 +0000 UTC m=+123.021572733 (durationBeforeRetry 500ms). 
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.700337 5113 projected.go:289] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.700387 5113 projected.go:194] Error preparing data for projected volume kube-api-access-75rdx for pod openshift-cluster-machine-approver/machine-approver-54c688565-d99w4: failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.700480 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx podName:00752359-7fda-4a4a-bddd-47c8c0939d7f nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.200452928 +0000 UTC m=+123.035702755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-75rdx" (UniqueName: "kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx") pod "machine-approver-54c688565-d99w4" (UID: "00752359-7fda-4a4a-bddd-47c8c0939d7f") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.704721 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvscm\" (UniqueName: \"kubernetes.io/projected/b8a7731c-7bfe-4a87-bfec-8e24fc0ab258-kube-api-access-dvscm\") pod \"package-server-manager-77f986bd66-q2zlc\" (UID: \"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.737179 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.737869 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.23783148 +0000 UTC m=+123.073081307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.744840 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjxh\" (UniqueName: \"kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh\") pod \"cni-sysctl-allowlist-ds-rlsj9\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.780635 5113 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.781003 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.790703 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.794234 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmhgv\" (UniqueName: \"kubernetes.io/projected/b75fe240-84aa-4d9f-be64-7f2727566095-kube-api-access-zmhgv\") pod \"csi-hostpathplugin-qdlcg\" (UID: \"b75fe240-84aa-4d9f-be64-7f2727566095\") " pod="hostpath-provisioner/csi-hostpathplugin-qdlcg"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.796438 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4slng\" (UniqueName: \"kubernetes.io/projected/cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea-kube-api-access-4slng\") pod \"router-default-68cf44c8b8-95skb\" (UID: \"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea\") " pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.800941 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmscr\" (UniqueName: \"kubernetes.io/projected/66b65774-248e-407d-9e9d-c9f770175654-kube-api-access-qmscr\") pod \"multus-admission-controller-69db94689b-4nblp\" (UID: \"66b65774-248e-407d-9e9d-c9f770175654\") " pod="openshift-multus/multus-admission-controller-69db94689b-4nblp"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.802483 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac816b90-036c-4024-a177-f6e32b250393-etcd-ca\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.842566 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.842722 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.34269262 +0000 UTC m=+123.177942447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.843966 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.844738 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.344722356 +0000 UTC m=+123.179972183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.860376 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zcr\" (UniqueName: \"kubernetes.io/projected/b1779ada-55ca-4d92-acd1-7a2a9ff46b6f-kube-api-access-r2zcr\") pod \"ingress-canary-brp2v\" (UID: \"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f\") " pod="openshift-ingress-canary/ingress-canary-brp2v"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.860800 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.892534 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctd7j\" (UniqueName: \"kubernetes.io/projected/6dc95aca-746a-4087-9143-d5e3591eb687-kube-api-access-ctd7j\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v56j9\" (UID: \"6dc95aca-746a-4087-9143-d5e3591eb687\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.893877 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v495z\" (UniqueName: \"kubernetes.io/projected/ff561547-8b9b-476c-8bc4-2fe57d56bcc0-kube-api-access-v495z\") pod \"ingress-operator-6b9cb4dbcf-bvnd7\" (UID: \"ff561547-8b9b-476c-8bc4-2fe57d56bcc0\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.894239 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.902471 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.903522 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzh66\" (UniqueName: \"kubernetes.io/projected/74f90e1e-39f6-4433-9dd0-82e74f7b2b0f-kube-api-access-fzh66\") pod \"dns-default-rlhjs\" (UID: \"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f\") " pod="openshift-dns/dns-default-rlhjs"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.912468 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.930850 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjdv\" (UniqueName: \"kubernetes.io/projected/33e71d64-89ca-44fa-9941-337b91d25c4f-kube-api-access-7zjdv\") pod \"machine-config-server-5cr2b\" (UID: \"33e71d64-89ca-44fa-9941-337b91d25c4f\") " pod="openshift-machine-config-operator/machine-config-server-5cr2b"
Dec 12 14:12:19 crc kubenswrapper[5113]: W1212 14:12:19.931669 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbad4b17_cb64_4d28_8fc5_6c1e002cb1ea.slice/crio-ef37fa77f78f393e5bbe671e85cd1e86b90e098f25e9558b8c5fb6c1618a0580 WatchSource:0}: Error finding container ef37fa77f78f393e5bbe671e85cd1e86b90e098f25e9558b8c5fb6c1618a0580: Status 404 returned error can't find the container with id ef37fa77f78f393e5bbe671e85cd1e86b90e098f25e9558b8c5fb6c1618a0580
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.945409 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:19 crc kubenswrapper[5113]: E1212 14:12:19.946005 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.445983877 +0000 UTC m=+123.281233704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.946092 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.953220 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5cr2b"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.962973 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlwnq\" (UniqueName: \"kubernetes.io/projected/5d332a03-7aae-43d2-b644-bdffb3c9b992-kube-api-access-mlwnq\") pod \"service-ca-74545575db-zqpbf\" (UID: \"5d332a03-7aae-43d2-b644-bdffb3c9b992\") " pod="openshift-service-ca/service-ca-74545575db-zqpbf"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.963575 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.969861 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-brp2v"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.983820 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5ws\" (UniqueName: \"kubernetes.io/projected/3d4e952f-6916-451e-aa61-2975f38fa7f4-kube-api-access-kh5ws\") pod \"olm-operator-5cdf44d969-7skgn\" (UID: \"3d4e952f-6916-451e-aa61-2975f38fa7f4\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.987875 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.990975 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-899rn\" (UniqueName: \"kubernetes.io/projected/de734fb8-e662-4172-b64e-57bb9b51c606-kube-api-access-899rn\") pod \"service-ca-operator-5b9c976747-4gtx7\" (UID: \"de734fb8-e662-4172-b64e-57bb9b51c606\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7"
Dec 12 14:12:19 crc kubenswrapper[5113]: I1212 14:12:19.992050 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rlhjs"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.008952 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.013113 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.022880 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76704edc-840b-442f-8f26-4a5c394e5e4f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.031571 5113 projected.go:289] Couldn't get configMap openshift-console/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.031602 5113 projected.go:194] Error preparing data for projected volume kube-api-access-jnrb4 for pod openshift-console/downloads-747b44746d-wnqfw: failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.031681 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4 podName:4723fa2f-a114-4d27-875f-951678d39dde nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.53165794 +0000 UTC m=+123.366907777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jnrb4" (UniqueName: "kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4") pod "downloads-747b44746d-wnqfw" (UID: "4723fa2f-a114-4d27-875f-951678d39dde") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.031789 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.050895 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.051735 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gckts\" (UniqueName: \"kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.051805 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.052460 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.552441139 +0000 UTC m=+123.387690966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.063883 5113 request.go:752] "Waited before sending request" delay="1.452292173s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=36837"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.066101 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.067667 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckts\" (UniqueName: \"kubernetes.io/projected/3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1-kube-api-access-gckts\") pod \"openshift-apiserver-operator-846cbfc458-w4zp6\" (UID: \"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.081412 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"]
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.129508 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-zqpbf"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.132320 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.132603 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.132872 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.143633 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.146682 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.152916 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z657h\" (UniqueName: \"kubernetes.io/projected/35aac665-ec48-4b0f-8e0a-b5b8b173ca1e-kube-api-access-z657h\") pod \"machine-config-controller-f9cdd68f7-sz7rz\" (UID: \"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.153960 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.154161 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.654113934 +0000 UTC m=+123.489363781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.154556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.155348 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.155949 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.655905432 +0000 UTC m=+123.491155259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.172114 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaa533e2-d1a2-4931-9a85-2584a1d06c96-kube-api-access\") pod \"kube-apiserver-operator-575994946d-79b8w\" (UID: \"eaa533e2-d1a2-4931-9a85-2584a1d06c96\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.174884 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.206578 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.259051 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.259293 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4479\" (UniqueName: \"kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm"
\"kube-api-access-m4479\" (UniqueName: \"kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.259405 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6hjr\" (UniqueName: \"kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.259474 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75rdx\" (UniqueName: \"kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.259588 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.759573213 +0000 UTC m=+123.594823040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.260891 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.261112 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.265429 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.268750 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4479\" (UniqueName: \"kubernetes.io/projected/edd2186b-b29e-49dd-8b4f-ed2081fac2d4-kube-api-access-m4479\") pod \"console-64d44f6ddf-zx9gm\" (UID: \"edd2186b-b29e-49dd-8b4f-ed2081fac2d4\") " pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:20 crc kubenswrapper[5113]: W1212 14:12:20.271610 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5311c643_bfa2_4959_bc65_a6e4e4f5cd22.slice/crio-461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd WatchSource:0}: Error finding container 461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd: Status 404 returned error can't find the container with id 461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.271964 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.272239 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6hjr\" (UniqueName: \"kubernetes.io/projected/0bac4af3-97cb-49b6-bd5a-c238ce8aefe0-kube-api-access-m6hjr\") pod \"cluster-samples-operator-6b564684c8-vc9l4\" (UID: \"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.273696 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.275823 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75rdx\" (UniqueName: \"kubernetes.io/projected/00752359-7fda-4a4a-bddd-47c8c0939d7f-kube-api-access-75rdx\") pod \"machine-approver-54c688565-d99w4\" (UID: \"00752359-7fda-4a4a-bddd-47c8c0939d7f\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.283920 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x"] Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.286198 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.294740 5113 projected.go:194] Error preparing data for projected volume kube-api-access-dv2mk for pod openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4: failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.294869 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk podName:c087108f-8a95-41df-be4f-7f61b16f4b74 nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.794840957 +0000 UTC m=+123.630090784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dv2mk" (UniqueName: "kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk") pod "openshift-controller-manager-operator-686468bdd5-7brq4" (UID: "c087108f-8a95-41df-be4f-7f61b16f4b74") : failed to sync configmap cache: timed out waiting for the condition Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.305948 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: W1212 14:12:20.308878 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe03d25b_e298_4d1b_8cf2_e62d9d80f1a7.slice/crio-5f1d6cc8810956e3c33321ff5de89732427c4f6fa87bb72014e3c8f71828d590 WatchSource:0}: Error finding container 5f1d6cc8810956e3c33321ff5de89732427c4f6fa87bb72014e3c8f71828d590: Status 404 returned error can't find the container with id 5f1d6cc8810956e3c33321ff5de89732427c4f6fa87bb72014e3c8f71828d590 Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.333676 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.345449 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.358393 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.358920 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zll2b\" (UniqueName: \"kubernetes.io/projected/1db49e24-69a6-47c8-b689-df0b1754efac-kube-api-access-zll2b\") pod \"dns-operator-799b87ffcd-cmps6\" (UID: \"1db49e24-69a6-47c8-b689-df0b1754efac\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.366012 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.366488 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.866473869 +0000 UTC m=+123.701723696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.372313 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.374107 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.448619 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.451369 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.451727 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.457403 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn6pz\" (UniqueName: \"kubernetes.io/projected/76704edc-840b-442f-8f26-4a5c394e5e4f-kube-api-access-hn6pz\") pod \"kube-storage-version-migrator-operator-565b79b866-q57sn\" (UID: \"76704edc-840b-442f-8f26-4a5c394e5e4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.465603 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.472515 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rmj\" (UniqueName: \"kubernetes.io/projected/ac816b90-036c-4024-a177-f6e32b250393-kube-api-access-w4rmj\") pod \"etcd-operator-69b85846b6-sntfz\" (UID: \"ac816b90-036c-4024-a177-f6e32b250393\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.474301 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.474690 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:20.974672897 +0000 UTC m=+123.809922724 (durationBeforeRetry 500ms). 
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.475834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8dl\" (UniqueName: \"kubernetes.io/projected/2a79044c-9f1f-4d59-8e63-e138868ebdd2-kube-api-access-wl8dl\") pod \"apiserver-8596bd845d-jqdlj\" (UID: \"2a79044c-9f1f-4d59-8e63-e138868ebdd2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.485348 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qb8t\" (UniqueName: \"kubernetes.io/projected/d4df8836-f797-48b9-905f-5790efb2e6af-kube-api-access-6qb8t\") pod \"console-operator-67c89758df-f4974\" (UID: \"d4df8836-f797-48b9-905f-5790efb2e6af\") " pod="openshift-console-operator/console-operator-67c89758df-f4974"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.505954 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.507002 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5cr2b" event={"ID":"33e71d64-89ca-44fa-9941-337b91d25c4f","Type":"ContainerStarted","Data":"acaa51588d0c178a1a263cf747f19cf1134802bf5516e2ee6dfc9217dbc98622"}
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.508505 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.515281 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.522633 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zcw5\" (UniqueName: \"kubernetes.io/projected/7cedf898-adef-4e24-9d76-b6a19006883c-kube-api-access-2zcw5\") pod \"migrator-866fcbc849-n69j2\" (UID: \"7cedf898-adef-4e24-9d76-b6a19006883c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.522964 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" event={"ID":"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7","Type":"ContainerStarted","Data":"5f1d6cc8810956e3c33321ff5de89732427c4f6fa87bb72014e3c8f71828d590"}
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.534151 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.543256 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.549666 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.550372 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" event={"ID":"ebccfef3-840e-48d6-9da8-c61502a955fa","Type":"ContainerStarted","Data":"adda92b6f91ed2bb5633f43c26c9b9f3bf70d4106595a1b623fcb8c6b7559c22"}
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.556613 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.560773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" event={"ID":"5311c643-bfa2-4959-bc65-a6e4e4f5cd22","Type":"ContainerStarted","Data":"461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd"}
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.586145 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrb4\" (UniqueName: \"kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4\") pod \"downloads-747b44746d-wnqfw\" (UID: \"4723fa2f-a114-4d27-875f-951678d39dde\") " pod="openshift-console/downloads-747b44746d-wnqfw"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.586214 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.586673 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.086655429 +0000 UTC m=+123.921905256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.589470 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-95skb" event={"ID":"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea","Type":"ContainerStarted","Data":"ef37fa77f78f393e5bbe671e85cd1e86b90e098f25e9558b8c5fb6c1618a0580"}
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.598040 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.598430 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-zx9gm"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.601488 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrb4\" (UniqueName: \"kubernetes.io/projected/4723fa2f-a114-4d27-875f-951678d39dde-kube-api-access-jnrb4\") pod \"downloads-747b44746d-wnqfw\" (UID: \"4723fa2f-a114-4d27-875f-951678d39dde\") " pod="openshift-console/downloads-747b44746d-wnqfw"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.633010 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7"]
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.633237 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.633498 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.636280 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.640417 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-f4974"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.680660 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.682061 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"]
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.725555 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-wnqfw"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.726667 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.727394 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.227376311 +0000 UTC m=+124.062626128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.728601 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.730184 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.731395 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.733468 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.23344824 +0000 UTC m=+124.068698117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.739667 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.739723 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.761164 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.767418 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" podStartSLOduration=103.7674027 podStartE2EDuration="1m43.7674027s" podCreationTimestamp="2025-12-12 14:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.763897805 +0000 UTC m=+123.599147642" watchObservedRunningTime="2025-12-12 14:12:20.7674027 +0000 UTC m=+123.602652527"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.767984 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.855677 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.856106 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2mk\" (UniqueName: \"kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.860913 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.360850666 +0000 UTC m=+124.196100503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.868073 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2mk\" (UniqueName: \"kubernetes.io/projected/c087108f-8a95-41df-be4f-7f61b16f4b74-kube-api-access-dv2mk\") pod \"openshift-controller-manager-operator-686468bdd5-7brq4\" (UID: \"c087108f-8a95-41df-be4f-7f61b16f4b74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.877965 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.879100 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.379077822 +0000 UTC m=+124.214327649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.908496 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9"]
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.910720 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x"]
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.965189 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-lh95k" podStartSLOduration=102.965169927 podStartE2EDuration="1m42.965169927s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:20.963543885 +0000 UTC m=+123.798793722" watchObservedRunningTime="2025-12-12 14:12:20.965169927 +0000 UTC m=+123.800419764"
Dec 12 14:12:20 crc kubenswrapper[5113]: I1212 14:12:20.979690 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:20 crc kubenswrapper[5113]: E1212 14:12:20.980419 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.480397416 +0000 UTC m=+124.315647243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.082531 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.083093 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.583074353 +0000 UTC m=+124.418324180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.151293 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.159243 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.185012 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.188698 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.688671666 +0000 UTC m=+124.523921493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:21 crc kubenswrapper[5113]: W1212 14:12:21.273961 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3eb42d_abaa_44da_9509_6c37f51f2cd9.slice/crio-e6ebb65b5f0fcac685b4c13a6c679f78da4a2fb60efadfa91f0d1fe341f295fe WatchSource:0}: Error finding container e6ebb65b5f0fcac685b4c13a6c679f78da4a2fb60efadfa91f0d1fe341f295fe: Status 404 returned error can't find the container with id e6ebb65b5f0fcac685b4c13a6c679f78da4a2fb60efadfa91f0d1fe341f295fe
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.301433 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.301484 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\")
" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.301527 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.301864 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.801851808 +0000 UTC m=+124.637101635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.307019 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.307362 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.315276 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.348325 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.398422 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-49qb7" podStartSLOduration=104.398401585 podStartE2EDuration="1m44.398401585s" podCreationTimestamp="2025-12-12 14:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.347571652 +0000 UTC m=+124.182821489" watchObservedRunningTime="2025-12-12 14:12:21.398401585 +0000 UTC m=+124.233651412" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.405986 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.406517 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.406607 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.407222 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:21.907194703 +0000 UTC m=+124.742444540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.409422 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.430733 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.555045 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.555972 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.556349 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.557097 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.557448 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 14:12:22.057434787 +0000 UTC m=+124.892684624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.578611 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=32.578596278 podStartE2EDuration="32.578596278s" podCreationTimestamp="2025-12-12 14:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.578590548 +0000 UTC m=+124.413840395" watchObservedRunningTime="2025-12-12 14:12:21.578596278 +0000 UTC m=+124.413846105" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.657746 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.658076 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.158056217 +0000 UTC m=+124.993306044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.664837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.667101 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.687277 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-2qnd9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.687717 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.759998 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.760358 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.260344102 +0000 UTC m=+125.095593929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.795579 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.810980 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.861570 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.862750 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.36272975 +0000 UTC m=+125.197979587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924388 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924433 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924491 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924507 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-95skb" event={"ID":"cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea","Type":"ContainerStarted","Data":"cee3afb5305fe2a4768c3ff412de743ce91c6106f6cc60a89563f6942229aa5b"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924546 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" event={"ID":"00752359-7fda-4a4a-bddd-47c8c0939d7f","Type":"ContainerStarted","Data":"8c937136da5b25af8b5983de7f7f89e903c6a0342a0ebe4a26acd2ee9a6bbd6b"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924564 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" event={"ID":"6dc95aca-746a-4087-9143-d5e3591eb687","Type":"ContainerStarted","Data":"616f73790d348bc4175931ccc3a9a49d32037478d0621bc07231eb5af0d8fe5c"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924576 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" event={"ID":"9d3eb42d-abaa-44da-9509-6c37f51f2cd9","Type":"ContainerStarted","Data":"e6ebb65b5f0fcac685b4c13a6c679f78da4a2fb60efadfa91f0d1fe341f295fe"} Dec 12 14:12:21 crc 
kubenswrapper[5113]: I1212 14:12:21.924608 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4nblp"] Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924629 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf"] Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924641 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt"] Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924651 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"] Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924671 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" event={"ID":"3d70d7f9-c575-48ab-ad97-52b038fbe2c6","Type":"ContainerStarted","Data":"b4eaa8faa63b37ade0e2378ff38cb0610bca397f56617da6033ef9cf4cee61ff"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924683 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" event={"ID":"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7","Type":"ContainerStarted","Data":"8c24388c5d6967b229870fcddace94d2e822b8ed420985fe6700b7a4273ef32a"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924697 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5cr2b" event={"ID":"33e71d64-89ca-44fa-9941-337b91d25c4f","Type":"ContainerStarted","Data":"ce5353bbd9c4c6b00e60d810a7e2a04202cb4af387a2312caa3d500860401d76"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924710 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" event={"ID":"41890204-5a81-44d5-99eb-82be690cc03d","Type":"ContainerStarted","Data":"66c0fea5be1325c2cc5c23491abe75dce176d17eda55c1b6c3e509b063d35c30"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.924723 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" event={"ID":"467a9df6-b909-453f-8a3a-8fec3fd4b54f","Type":"ContainerStarted","Data":"64ed8a5a0bd34c60f99de7dcc6713755511285d875877fded764b89adcd07258"} Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.964750 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:21 crc kubenswrapper[5113]: E1212 14:12:21.965299 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.465281564 +0000 UTC m=+125.300531391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.966536 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" podStartSLOduration=103.966512164 podStartE2EDuration="1m43.966512164s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.963841406 +0000 UTC m=+124.799091243" watchObservedRunningTime="2025-12-12 14:12:21.966512164 +0000 UTC m=+124.801761991" Dec 12 14:12:21 crc kubenswrapper[5113]: I1212 14:12:21.998046 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-xfscs" podStartSLOduration=103.998020204 podStartE2EDuration="1m43.998020204s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:21.985316379 +0000 UTC m=+124.820566216" watchObservedRunningTime="2025-12-12 14:12:21.998020204 +0000 UTC m=+124.833270031" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.066524 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.067196 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.067446 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.567428844 +0000 UTC m=+125.402678671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: W1212 14:12:22.085986 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b65774_248e_407d_9e9d_c9f770175654.slice/crio-990cc97e4f6951f2baf49f9c31a2c02669444b572e8f2b32f508982c0341054d WatchSource:0}: Error finding container 990cc97e4f6951f2baf49f9c31a2c02669444b572e8f2b32f508982c0341054d: Status 404 returned error can't find the container with id 990cc97e4f6951f2baf49f9c31a2c02669444b572e8f2b32f508982c0341054d Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.088775 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.122443 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357b225a-0c71-40ba-ac24-d769a9ff3f07-metrics-certs\") pod \"network-metrics-daemon-jvcjp\" (UID: \"357b225a-0c71-40ba-ac24-d769a9ff3f07\") " pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.168425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.168918 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.668904443 +0000 UTC m=+125.504154270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.194077 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:22 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:22 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:22 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.195422 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.270639 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.271034 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.771005282 +0000 UTC m=+125.606255109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.271194 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.271594 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.77158529 +0000 UTC m=+125.606835117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.371191 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" podStartSLOduration=104.371171438 podStartE2EDuration="1m44.371171438s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:22.342233971 +0000 UTC m=+125.177483818" watchObservedRunningTime="2025-12-12 14:12:22.371171438 +0000 UTC m=+125.206421265" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.373226 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.373556 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.873525204 +0000 UTC m=+125.708775031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.406672 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.412788 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jvcjp" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.425301 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" podStartSLOduration=104.425280588 podStartE2EDuration="1m44.425280588s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:22.372354566 +0000 UTC m=+125.207604413" watchObservedRunningTime="2025-12-12 14:12:22.425280588 +0000 UTC m=+125.260530425" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.460913 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podStartSLOduration=104.460894282 podStartE2EDuration="1m44.460894282s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:22.42933293 +0000 UTC m=+125.264582787" watchObservedRunningTime="2025-12-12 14:12:22.460894282 +0000 UTC m=+125.296144119" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.474459 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.474854 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:22.974836998 +0000 UTC m=+125.810086825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.502540 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" podStartSLOduration=104.502520293 podStartE2EDuration="1m44.502520293s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:22.49936533 +0000 UTC m=+125.334615167" watchObservedRunningTime="2025-12-12 14:12:22.502520293 +0000 UTC m=+125.337770120" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.502691 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5cr2b" podStartSLOduration=8.502685169 podStartE2EDuration="8.502685169s" podCreationTimestamp="2025-12-12 14:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:22.463473146 +0000 UTC m=+125.298722993" watchObservedRunningTime="2025-12-12 14:12:22.502685169 +0000 UTC m=+125.337934996" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.576843 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.577006 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.076981438 +0000 UTC m=+125.912231265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.577200 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.577596 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.077586428 +0000 UTC m=+125.912836255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.603536 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.678099 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.678383 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.178362154 +0000 UTC m=+126.013611981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.711879 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" event={"ID":"5311c643-bfa2-4959-bc65-a6e4e4f5cd22","Type":"ContainerStarted","Data":"75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.712742 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" event={"ID":"ff561547-8b9b-476c-8bc4-2fe57d56bcc0","Type":"ContainerStarted","Data":"f98dbb0167a8fe0688144e71c9c51f01a7e80cac537b9fe50b42ad64a08e331a"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.713723 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" event={"ID":"1c12c797-4410-4399-8ea4-138a53a8ef49","Type":"ContainerStarted","Data":"c471f1a49b641b5ec5d84544096dfa838fe8f389c5aca1b12f4d5b1175b7d3e1"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.714593 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" event={"ID":"ee6f605d-f530-448b-825d-cd7dedd4c632","Type":"ContainerStarted","Data":"be5c7c74e47aebd2e22c4ebaead251211fe56bf1780837fd9d90beb5516a6e42"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.715346 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" event={"ID":"66b65774-248e-407d-9e9d-c9f770175654","Type":"ContainerStarted","Data":"990cc97e4f6951f2baf49f9c31a2c02669444b572e8f2b32f508982c0341054d"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.716250 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" event={"ID":"a5d19b96-2e13-4b63-9055-7f8c8eb785fb","Type":"ContainerStarted","Data":"24d7224989705af680244984b488ad58098ae6fc292dfd8225f65b14d751c412"} Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.780032 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.780558 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.280539745 +0000 UTC m=+126.115789572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.811474 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-brp2v"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.825397 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.829553 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.834628 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.835499 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-zx9gm"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.844336 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.847477 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.862078 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.862806 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.867586 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-zqpbf"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.869715 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.872664 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj"] Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.881712 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.881959 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.38193011 +0000 UTC m=+126.217179947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.882410 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.883937 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.383923556 +0000 UTC m=+126.219173383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.899292 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:22 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:22 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:22 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.899424 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:22 crc kubenswrapper[5113]: I1212 14:12:22.983832 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:22 crc kubenswrapper[5113]: E1212 14:12:22.984287 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.484265068 +0000 UTC m=+126.319514895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.005190 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-wnqfw"] Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.020479 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rlhjs"] Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.028931 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdlcg"] Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.038643 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3786e9cd_3b8f_45ca_a319_5bc9aa14fdd1.slice/crio-290b6f18ea381ad8142cb3874afcf65e2b6d91e9f69a86a15b0d36e63fe23950 WatchSource:0}: Error finding container 290b6f18ea381ad8142cb3874afcf65e2b6d91e9f69a86a15b0d36e63fe23950: Status 404 returned error can't find the container with id 290b6f18ea381ad8142cb3874afcf65e2b6d91e9f69a86a15b0d36e63fe23950 Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.052283 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jvcjp"] Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.053848 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a79044c_9f1f_4d59_8e63_e138868ebdd2.slice/crio-1fdcbbb87097de65eba5cb9982223d459c120e03bb671c16ec897d788597fb1d WatchSource:0}: Error finding container 1fdcbbb87097de65eba5cb9982223d459c120e03bb671c16ec897d788597fb1d: Status 404 returned error can't find the container with id 1fdcbbb87097de65eba5cb9982223d459c120e03bb671c16ec897d788597fb1d Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.061440 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357b225a_0c71_40ba_ac24_d769a9ff3f07.slice/crio-bbfb49e9dc02c123022560aba3da171c80c52bf4c5dd9786ad71fc2c2e7700ce WatchSource:0}: Error finding container bbfb49e9dc02c123022560aba3da171c80c52bf4c5dd9786ad71fc2c2e7700ce: Status 404 returned error can't find the container with id bbfb49e9dc02c123022560aba3da171c80c52bf4c5dd9786ad71fc2c2e7700ce Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.085965 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.086537 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.586518471 +0000 UTC m=+126.421768298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.137899 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-f4974"] Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.147148 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cmps6"] Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.149030 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sntfz"] Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.150870 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-943a06ffd82decd76aaab1fed0b2b1706b39dd17635b7ccd8d3ee285dd8ce8cc WatchSource:0}: Error finding container 943a06ffd82decd76aaab1fed0b2b1706b39dd17635b7ccd8d3ee285dd8ce8cc: Status 404 returned error can't find the container with id 943a06ffd82decd76aaab1fed0b2b1706b39dd17635b7ccd8d3ee285dd8ce8cc Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.156828 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac816b90_036c_4024_a177_f6e32b250393.slice/crio-005dbc07ef72c86a7dc82926e645df9454e2063ddc8010dafc0b444f14775f64 WatchSource:0}: Error finding container 005dbc07ef72c86a7dc82926e645df9454e2063ddc8010dafc0b444f14775f64: Status 404 returned error can't find the container with id 005dbc07ef72c86a7dc82926e645df9454e2063ddc8010dafc0b444f14775f64 Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.158196 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-c313920043fecf8d1ff10344bc1a8351fb1c0002e9eac237ca210276dd6a217e WatchSource:0}: Error finding container c313920043fecf8d1ff10344bc1a8351fb1c0002e9eac237ca210276dd6a217e: Status 404 returned error can't find the container with id c313920043fecf8d1ff10344bc1a8351fb1c0002e9eac237ca210276dd6a217e Dec 12 14:12:23 crc kubenswrapper[5113]: W1212 14:12:23.160100 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db49e24_69a6_47c8_b689_df0b1754efac.slice/crio-06e8ce2c68d4a22f94d17790a626c9f8d022660edd0776d8b32cdc3bebaa04ba WatchSource:0}: Error finding container 06e8ce2c68d4a22f94d17790a626c9f8d022660edd0776d8b32cdc3bebaa04ba: Status 404 returned error can't find the container with id 06e8ce2c68d4a22f94d17790a626c9f8d022660edd0776d8b32cdc3bebaa04ba Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.188719 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
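
The retry storm above has a single cause: kubelet's CSI volume code resolves the driver name against the node-local registry of plugins that have completed registration over the kubelet plugin-registration socket, and `kubevirt.io.hostpath-provisioner` is not in it yet — the `hostpath-provisioner/csi-hostpathplugin-qdlcg` pod is still being synced at 14:12:23 above. Until registration completes, every MountVolume.MountDevice for `pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2` (pod `image-registry-66587d64c8-cd7rw`) and every UnmountVolume.TearDown for the departing pod `9e9b5059-1b3e-4067-a63d-2952cbe863af` fails fast and is requeued with a 500 ms durationBeforeRetry. A minimal client-go sketch of the usual diagnostic — reading the node's CSINode object, whose spec.drivers list mirrors that registry — follows; the kubeconfig path is an assumption, while the node name `crc` and the driver name are taken from the log.

```go
// Diagnostic sketch (not part of the log): list the CSI drivers that have
// completed node-plugin registration on node "crc". A driver missing from
// this list is exactly the condition behind "driver name
// kubevirt.io.hostpath-provisioner not found in the list of registered
// CSI drivers" above. The kubeconfig path is an assumed default.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object for a node mirrors the kubelet's registry of
	// node-registered CSI plugins.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```

Once `kubevirt.io.hostpath-provisioner` appears in that list, the next 500 ms retry of both the MountDevice and the TearDown operations should go through.
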
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.189023 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.688971362 +0000 UTC m=+126.524221189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.191400 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4"]
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.229142 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4"]
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.289693 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.290078 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.790062998 +0000 UTC m=+126.625312825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.390667 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.390872 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.890844074 +0000 UTC m=+126.726093901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.391401 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.391750 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.891736563 +0000 UTC m=+126.726986390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.492331 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.492506 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.992479807 +0000 UTC m=+126.827729634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.492838 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.493272 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:23.993257523 +0000 UTC m=+126.828507340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.550017 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-2qnd9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.550075 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.576155 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podStartSLOduration=9.576104342 podStartE2EDuration="9.576104342s" podCreationTimestamp="2025-12-12 14:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:23.575322807 +0000 UTC m=+126.410572654" watchObservedRunningTime="2025-12-12 14:12:23.576104342 +0000 UTC m=+126.411354169"
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.596772 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.597966 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.097947817 +0000 UTC m=+126.933197644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.698302 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.698804 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.198784284 +0000 UTC m=+127.034034111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.720590 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" event={"ID":"b75fe240-84aa-4d9f-be64-7f2727566095","Type":"ContainerStarted","Data":"8696f21be1bbd1eb77cc96dda053f6d39d5d3a7c6dc843cad7cad392e24ad355"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.722254 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" event={"ID":"3d70d7f9-c575-48ab-ad97-52b038fbe2c6","Type":"ContainerStarted","Data":"bd93f0c8b808ceb1a56c34f42e688fde2de201b7118c59f8100fbcc4829bf23f"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.722924 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" event={"ID":"7cedf898-adef-4e24-9d76-b6a19006883c","Type":"ContainerStarted","Data":"4413260c52d3beb1c1ea5a54d4065816568c0541d1b4a6ad38fa00339c071ed5"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.725363 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"943a06ffd82decd76aaab1fed0b2b1706b39dd17635b7ccd8d3ee285dd8ce8cc"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.726453 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"c313920043fecf8d1ff10344bc1a8351fb1c0002e9eac237ca210276dd6a217e"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.729051 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" event={"ID":"41890204-5a81-44d5-99eb-82be690cc03d","Type":"ContainerStarted","Data":"d076ac6f75f959144a6e590271480bc9a91a6233feada71e3dddd03bf4da4374"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.734711 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rlhjs" event={"ID":"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f","Type":"ContainerStarted","Data":"d65f290ceecc78680e05fd0ba12c2227149975f8b7a2508ac61da67f2304c87e"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.735653 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-brp2v" event={"ID":"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f","Type":"ContainerStarted","Data":"82c952e53f2a2ef240d335430e5d9de8e29dbbee18af784d33bf1b311b4bcb61"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.736454 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" event={"ID":"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1","Type":"ContainerStarted","Data":"290b6f18ea381ad8142cb3874afcf65e2b6d91e9f69a86a15b0d36e63fe23950"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.737177 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" event={"ID":"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258","Type":"ContainerStarted","Data":"2ec17ef8f927a74c74820ee7a20400c0b4f873bb761b6b34da01b195a5988a3c"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.737886 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" event={"ID":"2a79044c-9f1f-4d59-8e63-e138868ebdd2","Type":"ContainerStarted","Data":"1fdcbbb87097de65eba5cb9982223d459c120e03bb671c16ec897d788597fb1d"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.739036 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" event={"ID":"00752359-7fda-4a4a-bddd-47c8c0939d7f","Type":"ContainerStarted","Data":"2bb096131f77efe1cf65e948d6680492b57d5b18ed749987acbd4add227120cb"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.739691 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"4f0159f230a041e5be1f3155013033a0c0fc6274c823c35970a88adbadb089d7"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.740442 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" event={"ID":"3d4e952f-6916-451e-aa61-2975f38fa7f4","Type":"ContainerStarted","Data":"aff7d57b2912a4080ed305c1ab1e960f82d2f87e2bdf6ec8c60e8adfb58da0e4"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.742066 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" event={"ID":"ac816b90-036c-4024-a177-f6e32b250393","Type":"ContainerStarted","Data":"005dbc07ef72c86a7dc82926e645df9454e2063ddc8010dafc0b444f14775f64"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.742739 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" event={"ID":"357b225a-0c71-40ba-ac24-d769a9ff3f07","Type":"ContainerStarted","Data":"bbfb49e9dc02c123022560aba3da171c80c52bf4c5dd9786ad71fc2c2e7700ce"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.743415 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" event={"ID":"c087108f-8a95-41df-be4f-7f61b16f4b74","Type":"ContainerStarted","Data":"7b8c5016c0b6e07277571248308c7ccdc62b6b3689fc2e5b732458642401719e"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.744002 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-wnqfw" event={"ID":"4723fa2f-a114-4d27-875f-951678d39dde","Type":"ContainerStarted","Data":"2cee6fa783b3e5c92e7491c9374448455e8abe5899496bd2ccef1856f07464b9"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.744704 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" event={"ID":"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e","Type":"ContainerStarted","Data":"99305b5589500f3d8fe2d1257a20890028c461cdb51308506076256cd6abe1e0"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.745493 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-zx9gm" event={"ID":"edd2186b-b29e-49dd-8b4f-ed2081fac2d4","Type":"ContainerStarted","Data":"518391e178e42df4b4835b0e2d867c5791fa7a6c07bd2f509a848ef8970952ef"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.746100 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" event={"ID":"76704edc-840b-442f-8f26-4a5c394e5e4f","Type":"ContainerStarted","Data":"2a8b97fe7dc2f7290d812358b0d39c36689201f890fa75d068c77c6a2aedb72a"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.746801 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" event={"ID":"1db49e24-69a6-47c8-b689-df0b1754efac","Type":"ContainerStarted","Data":"06e8ce2c68d4a22f94d17790a626c9f8d022660edd0776d8b32cdc3bebaa04ba"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.747491 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" event={"ID":"de734fb8-e662-4172-b64e-57bb9b51c606","Type":"ContainerStarted","Data":"6795083a331d24e3e33d770773731c64e8a16f9c7397b6012b87afbbb21588b3"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.749307 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" event={"ID":"6dc95aca-746a-4087-9143-d5e3591eb687","Type":"ContainerStarted","Data":"1917f71c2e1604b2320f8f6726569a65f4ee09851210f95ed52e05c26268185a"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.750040 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-f4974" event={"ID":"d4df8836-f797-48b9-905f-5790efb2e6af","Type":"ContainerStarted","Data":"f15abf4b25ac92a8f881fac007a5d03f98360467b78729c497dbced3a6388478"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.751710 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" event={"ID":"eaa533e2-d1a2-4931-9a85-2584a1d06c96","Type":"ContainerStarted","Data":"0f318dc20eecbf9a7df740abb1c2ff8d0463f0fd739608c4daa06158654e74d3"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.752460 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-zqpbf" event={"ID":"5d332a03-7aae-43d2-b644-bdffb3c9b992","Type":"ContainerStarted","Data":"3560d69bcf8b5b0d9f7b8e0130043452342e44491594aaab7f57413718a9bfb3"}
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.799545 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.799725 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.299693375 +0000 UTC m=+127.134943202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.799913 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.800315 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.300298674 +0000 UTC m=+127.135548501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.878057 5113 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-btqmz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]log ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]etcd ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/max-in-flight-filter ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 12 14:12:23 crc kubenswrapper[5113]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 14:12:23 crc kubenswrapper[5113]: livez check failed
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.878148 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" podUID="ebccfef3-840e-48d6-9da8-c61502a955fa" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.924912 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:23 crc kubenswrapper[5113]: E1212 14:12:23.925227 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.425199099 +0000 UTC m=+127.260448926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.927027 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 14:12:23 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 12 14:12:23 crc kubenswrapper[5113]: [+]process-running ok
Dec 12 14:12:23 crc kubenswrapper[5113]: healthz check failed
Dec 12 14:12:23 crc kubenswrapper[5113]: I1212 14:12:23.927080 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.026822 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.027228 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.527210814 +0000 UTC m=+127.362460641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.128848 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.129038 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.629003963 +0000 UTC m=+127.464253790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.129561 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.129943 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.629935884 +0000 UTC m=+127.465185711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.230803 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.231150 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.731108753 +0000 UTC m=+127.566358580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.370545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.370927 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.870893044 +0000 UTC m=+127.706142871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.471795 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.471996 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.971955158 +0000 UTC m=+127.807204985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.472277 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.472598 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:24.972590899 +0000 UTC m=+127.807840726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.573041 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.573682 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.073657235 +0000 UTC m=+127.908907062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.573789 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.574143 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.07411433 +0000 UTC m=+127.909364157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.674881 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.675043 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.1750201 +0000 UTC m=+128.010269927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.675168 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.675496 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.175484965 +0000 UTC m=+128.010734792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.784224 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.784636 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.284614184 +0000 UTC m=+128.119864011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.885980 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.886380 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.386364371 +0000 UTC m=+128.221614208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.888715 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" event={"ID":"ee6f605d-f530-448b-825d-cd7dedd4c632","Type":"ContainerStarted","Data":"ddc40e33f6bd52095414c665754e6589d31964545ab59a409b388eaea73cebfe"}
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.897782 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 14:12:24 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 12 14:12:24 crc kubenswrapper[5113]: [+]process-running ok
Dec 12 14:12:24 crc kubenswrapper[5113]: healthz check failed
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.897846 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.923696 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v8bxf" podStartSLOduration=106.923678331 podStartE2EDuration="1m46.923678331s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:24.92179389 +0000 UTC m=+127.757043727" watchObservedRunningTime="2025-12-12 14:12:24.923678331 +0000 UTC m=+127.758928158"
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.948470 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-brp2v" event={"ID":"b1779ada-55ca-4d92-acd1-7a2a9ff46b6f","Type":"ContainerStarted","Data":"fde7de61d51d89521429100fea9b5d82ada4cb44fb989535ece5fc49f1b3025e"}
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.986702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:24 crc kubenswrapper[5113]: E1212 14:12:24.987612 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.487595271 +0000 UTC m=+128.322845098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.987696 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-brp2v" podStartSLOduration=10.987672544 podStartE2EDuration="10.987672544s" podCreationTimestamp="2025-12-12 14:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:24.985801023 +0000 UTC m=+127.821050860" watchObservedRunningTime="2025-12-12 14:12:24.987672544 +0000 UTC m=+127.822922371"
Dec 12 14:12:24 crc kubenswrapper[5113]: I1212 14:12:24.991575 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" event={"ID":"467a9df6-b909-453f-8a3a-8fec3fd4b54f","Type":"ContainerStarted","Data":"635fb52ed48fcb1a9068dd92794f5538b05fdf1a18fafb73826d8787e33aea07"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.000063 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" event={"ID":"9d3eb42d-abaa-44da-9509-6c37f51f2cd9","Type":"ContainerStarted","Data":"15567a5d4bd4595cd9b9cf47397511a1af02640dd4526bce73bccaf95093222b"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.001604 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" event={"ID":"ff561547-8b9b-476c-8bc4-2fe57d56bcc0","Type":"ContainerStarted","Data":"ea4929e36254cf4a06ee71566112e3c6bd6f6bb6a077974cf666714959a7c787"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.002832 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" event={"ID":"1c12c797-4410-4399-8ea4-138a53a8ef49","Type":"ContainerStarted","Data":"52788249c940f7f15a8c61b871adb5631f1af37b61a17a44a1a0d07c80d0aea8"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.005349 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" event={"ID":"66b65774-248e-407d-9e9d-c9f770175654","Type":"ContainerStarted","Data":"c836b050aeeb5a0428c8da797aff3052b48e9f74b8382ca42b064e40b5bd798e"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.024187 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" event={"ID":"a5d19b96-2e13-4b63-9055-7f8c8eb785fb","Type":"ContainerStarted","Data":"dd1784f7e05e9b53865fe09adb0c07363db01f0454b5dc46fbb1117f435a4b41"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.028857 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.028899 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.057776 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" event={"ID":"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0","Type":"ContainerStarted","Data":"3070afcd59373a76e9b1f59f8d601d3c15abc065461ddb61b77c3d23287b7870"}
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.059143 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.062520 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" podStartSLOduration=107.062501662 podStartE2EDuration="1m47.062501662s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.059620818 +0000 UTC m=+127.894870655" watchObservedRunningTime="2025-12-12 14:12:25.062501662 +0000 UTC m=+127.897751489"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.065913 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.086983 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9656x" podStartSLOduration=107.086961451 podStartE2EDuration="1m47.086961451s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.084659466 +0000 UTC m=+127.919909293" watchObservedRunningTime="2025-12-12 14:12:25.086961451 +0000 UTC m=+127.922211278"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.089072 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.091083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.092893 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.592874945 +0000 UTC m=+128.428124842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.149821 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-gnb6x" podStartSLOduration=107.149799366 podStartE2EDuration="1m47.149799366s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.146463908 +0000 UTC m=+127.981713755" watchObservedRunningTime="2025-12-12 14:12:25.149799366 +0000 UTC m=+127.985049193"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.173301 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qnk64" podStartSLOduration=107.173282174 podStartE2EDuration="1m47.173282174s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.171399733 +0000 UTC m=+128.006649570" watchObservedRunningTime="2025-12-12 14:12:25.173282174 +0000 UTC m=+128.008532001"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.195748 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.196525 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.696502224 +0000 UTC m=+128.531752051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.225481 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" podStartSLOduration=107.22546386 podStartE2EDuration="1m47.22546386s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.206689567 +0000 UTC m=+128.041939404" watchObservedRunningTime="2025-12-12 14:12:25.22546386 +0000 UTC m=+128.060713687"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.227217 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v56j9" podStartSLOduration=107.227210028 podStartE2EDuration="1m47.227210028s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:25.223833717 +0000 UTC m=+128.059083564" watchObservedRunningTime="2025-12-12 14:12:25.227210028 +0000 UTC m=+128.062459865"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.298398 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.298829 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.798814129 +0000 UTC m=+128.634063956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.383872 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9"
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.400019 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.400401 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:25.900382101 +0000 UTC m=+128.735631928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.517012 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.517642 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.017609635 +0000 UTC m=+128.852859462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.578616 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rlsj9"]
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.619508 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.620121 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.120099636 +0000 UTC m=+128.955349463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.721567 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.722060 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.22203923 +0000 UTC m=+129.057289057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.844959 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.845426 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.345401764 +0000 UTC m=+129.180651591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.903976 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:25 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:25 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:25 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.904039 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:25 crc kubenswrapper[5113]: I1212 14:12:25.947377 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:25 crc kubenswrapper[5113]: E1212 14:12:25.947945 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.447928167 +0000 UTC m=+129.283177994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.031640 5113 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-v88kj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.031947 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" podUID="a5d19b96-2e13-4b63-9055-7f8c8eb785fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.048861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.050097 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.549976705 +0000 UTC m=+129.385226552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.104709 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" event={"ID":"3786e9cd-3b8f-45ca-a319-5bc9aa14fdd1","Type":"ContainerStarted","Data":"3926761c639662f70384a17b1005adb6e10bdf29467eda96f237603abde89c40"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.113550 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" event={"ID":"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258","Type":"ContainerStarted","Data":"9388404726a9fc74dd25b243ab32e39e5489a95b631ae7f10a98a46a6a70351e"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.115734 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" event={"ID":"2a79044c-9f1f-4d59-8e63-e138868ebdd2","Type":"ContainerStarted","Data":"34f8367724beb104595df0500c2918e0f309a0ec4b07b181a340fbcf28be50f5"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.151113 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.151476 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.651461523 +0000 UTC m=+129.486711350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.157178 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" event={"ID":"3d4e952f-6916-451e-aa61-2975f38fa7f4","Type":"ContainerStarted","Data":"1bdafd0df8ba7edc9d6baaf7249fd93deaaed9fc6118a2744368dbcbc7cd64a5"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.157681 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.160219 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-7skgn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.160268 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" podUID="3d4e952f-6916-451e-aa61-2975f38fa7f4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.162857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-wnqfw" event={"ID":"4723fa2f-a114-4d27-875f-951678d39dde","Type":"ContainerStarted","Data":"4ea4965a434bdd97000eeb4b5c0717f710b79bbb2c9cca89c6af0dfc9345ba13"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.163206 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.184634 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" event={"ID":"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e","Type":"ContainerStarted","Data":"19e586e2dda43eac35510a9cd66a6a784047d48c8f33a1be4972da8163c4c46f"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.201539 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.201603 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.261259 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.262722 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.762695201 +0000 UTC m=+129.597945028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.293734 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-zx9gm" event={"ID":"edd2186b-b29e-49dd-8b4f-ed2081fac2d4","Type":"ContainerStarted","Data":"fb8f3604bf68edd98b69aa784be0c66113d0116d0d3dbc5a43d25f86d06ebcce"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.313661 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w4zp6" podStartSLOduration=109.313633637 podStartE2EDuration="1m49.313633637s" podCreationTimestamp="2025-12-12 14:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.176464621 +0000 UTC m=+129.011714458" watchObservedRunningTime="2025-12-12 14:12:26.313633637 +0000 UTC m=+129.148883464" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.378956 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.379409 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.879389447 +0000 UTC m=+129.714639284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.408697 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" podStartSLOduration=108.408677855 podStartE2EDuration="1m48.408677855s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.374444935 +0000 UTC m=+129.209694792" watchObservedRunningTime="2025-12-12 14:12:26.408677855 +0000 UTC m=+129.243927882" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.408952 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-wnqfw" podStartSLOduration=108.408946784 podStartE2EDuration="1m48.408946784s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.406958398 +0000 UTC m=+129.242208225" watchObservedRunningTime="2025-12-12 14:12:26.408946784 +0000 UTC m=+129.244196611" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.459736 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-zx9gm" podStartSLOduration=108.459721743 podStartE2EDuration="1m48.459721743s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.457224062 +0000 UTC m=+129.292473899" watchObservedRunningTime="2025-12-12 14:12:26.459721743 +0000 UTC m=+129.294971570" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.464495 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.482777 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.483178 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:26.98315419 +0000 UTC m=+129.818404017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.585047 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.585731 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.085716004 +0000 UTC m=+129.920965831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.695235 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.695268 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" event={"ID":"76704edc-840b-442f-8f26-4a5c394e5e4f","Type":"ContainerStarted","Data":"be9e2f1d9a7dc85247f035078dc85c10205e0c6aa61757b09ff67cb3f07ea541"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.695291 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" event={"ID":"de734fb8-e662-4172-b64e-57bb9b51c606","Type":"ContainerStarted","Data":"1fbc9e6234107e7a0b3a00d4910099ab3fbf1147c6874eb0e1769789b2dcd109"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.695319 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" event={"ID":"eaa533e2-d1a2-4931-9a85-2584a1d06c96","Type":"ContainerStarted","Data":"9981712eb722147529dc08d2679ca4fa82012912ac0e70db94212bfcc205a13a"} Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.695436 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.699539 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 
12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.699904 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.199884348 +0000 UTC m=+130.035134175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.700904 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.705559 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.707613 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.750764 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q57sn" podStartSLOduration=108.750741541 podStartE2EDuration="1m48.750741541s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.749544962 +0000 UTC m=+129.584794799" watchObservedRunningTime="2025-12-12 14:12:26.750741541 +0000 UTC m=+129.585991368" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.786087 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4gtx7" podStartSLOduration=108.786074317 podStartE2EDuration="1m48.786074317s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.783807463 +0000 UTC m=+129.619057300" watchObservedRunningTime="2025-12-12 14:12:26.786074317 +0000 UTC m=+129.621324144" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.803774 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.803920 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l577t\" (UniqueName: \"kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.803956 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.811815 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.826020 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.325986492 +0000 UTC m=+130.161236329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.934155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.934703 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.934772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.934818 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l577t\" (UniqueName: \"kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.935447 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " 
pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: E1212 14:12:26.935673 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.435649928 +0000 UTC m=+130.270899755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.936152 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.960365 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:26 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:26 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:26 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.960457 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.991971 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-btqmz" Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.992024 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.992063 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 14:12:26 crc kubenswrapper[5113]: I1212 14:12:26.994745 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.020890 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l577t\" (UniqueName: \"kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t\") pod \"certified-operators-xk8lq\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.022670 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.025322 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-79b8w" podStartSLOduration=109.02530754 podStartE2EDuration="1m49.02530754s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:26.934595494 +0000 UTC m=+129.769845341" watchObservedRunningTime="2025-12-12 14:12:27.02530754 +0000 UTC m=+129.860557367" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.058907 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.060182 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.060565 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.560547973 +0000 UTC m=+130.395797800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.166790 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.167070 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.167255 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgjd\" (UniqueName: \"kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.167346 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.167530 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.667504141 +0000 UTC m=+130.502753968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.286768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.286875 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.287038 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.287145 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgjd\" (UniqueName: \"kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.288061 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.788043493 +0000 UTC m=+130.623293320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.289071 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.302258 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.369307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgjd\" (UniqueName: \"kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd\") pod \"community-operators-gtkxn\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.388538 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.388973 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.888954463 +0000 UTC m=+130.724204290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.416502 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.456366 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-v88kj" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.456416 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.456442 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.457643 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.483678 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.483783 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.489939 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.490460 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:27.990443981 +0000 UTC m=+130.825693798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.593657 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.593845 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.593895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.594564 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.094535195 +0000 UTC m=+130.929785022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.693000 5113 generic.go:358] "Generic (PLEG): container finished" podID="2a79044c-9f1f-4d59-8e63-e138868ebdd2" containerID="34f8367724beb104595df0500c2918e0f309a0ec4b07b181a340fbcf28be50f5" exitCode=0 Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.695090 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.695222 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.695258 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.695678 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.695929 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.195914421 +0000 UTC m=+131.031164248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.782579 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.795867 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.796229 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.29618845 +0000 UTC m=+131.131438277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.878144 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.901206 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:27 crc kubenswrapper[5113]: E1212 14:12:27.901603 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.401587606 +0000 UTC m=+131.236837433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.909715 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:27 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:27 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:27 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:27 crc kubenswrapper[5113]: I1212 14:12:27.909756 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.003420 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.004063 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.504036717 +0000 UTC m=+131.339286544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.108524 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.109398 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.609384112 +0000 UTC m=+131.444633939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.210894 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.211185 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.711167071 +0000 UTC m=+131.546416898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.312423 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.312885 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.812862857 +0000 UTC m=+131.648112684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.414227 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.414642 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:28.914621384 +0000 UTC m=+131.749871211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.515958 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.516394 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.016376212 +0000 UTC m=+131.851626039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.617399 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.617772 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.117728777 +0000 UTC m=+131.952978604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.618271 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.618658 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.118647566 +0000 UTC m=+131.953897393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.719966 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.720094 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.220073033 +0000 UTC m=+132.055322850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.720306 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.720644 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.220630272 +0000 UTC m=+132.055880099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.821873 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.822107 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.32208524 +0000 UTC m=+132.157335067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.822520 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.822987 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.322974598 +0000 UTC m=+132.158224425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.897298 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:28 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:28 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:28 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.897417 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.924332 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.924521 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.424493608 +0000 UTC m=+132.259743435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:28 crc kubenswrapper[5113]: I1212 14:12:28.924882 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:28 crc kubenswrapper[5113]: E1212 14:12:28.925335 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.425316776 +0000 UTC m=+132.260566603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.026624 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.026754 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.526727142 +0000 UTC m=+132.361976979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.027093 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.027394 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.527382273 +0000 UTC m=+132.362632110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.128600 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.128812 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.628773729 +0000 UTC m=+132.464023556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.129082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.129498 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.629480352 +0000 UTC m=+132.464730179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.230543 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.230721 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.730690302 +0000 UTC m=+132.565940129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.230912 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.231274 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.73126527 +0000 UTC m=+132.566515097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.332888 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.333041 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.833013988 +0000 UTC m=+132.668263815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.333353 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.333951 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.833937029 +0000 UTC m=+132.669186856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.435091 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.435329 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.935289113 +0000 UTC m=+132.770538940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.435945 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436387 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" event={"ID":"2a79044c-9f1f-4d59-8e63-e138868ebdd2","Type":"ContainerDied","Data":"34f8367724beb104595df0500c2918e0f309a0ec4b07b181a340fbcf28be50f5"} Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.436446 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:29.93643385 +0000 UTC m=+132.771683677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436469 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"c2aa85b8391b9f3b99c140e84150a2419c08455e735603c8f0fb5175f7d409be"} Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436794 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436818 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436835 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" event={"ID":"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0","Type":"ContainerStarted","Data":"5d7a9288f457a1039c14762f7be53a7860bc8674094d8551e45c46309016811e"} Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.436855 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-zqpbf" event={"ID":"5d332a03-7aae-43d2-b644-bdffb3c9b992","Type":"ContainerStarted","Data":"2a82425430503a59eb338ab50dae324e5a7a7c331572bf1aee851a83606d2185"} Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.438246 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.452053 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.538682 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.539340 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.539432 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.539466 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5w7\" (UniqueName: \"kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.539916 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.039891993 +0000 UTC m=+132.875141830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.640761 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.640843 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.640865 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5w7\" (UniqueName: \"kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.640942 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.641349 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.641615 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.14159947 +0000 UTC m=+132.976849287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.641975 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.687359 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5w7\" (UniqueName: \"kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7\") pod \"certified-operators-6qwjt\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.742747 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.743315 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.243296276 +0000 UTC m=+133.078546103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.775262 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.847008 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.847355 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.347341888 +0000 UTC m=+133.182591705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.898028 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-95skb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 14:12:29 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 12 14:12:29 crc kubenswrapper[5113]: [+]process-running ok Dec 12 14:12:29 crc kubenswrapper[5113]: healthz check failed Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.898404 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-95skb" podUID="cbad4b17-cb64-4d28-8fc5-6c1e002cb1ea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.944220 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" gracePeriod=30 Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.948828 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:29 crc kubenswrapper[5113]: E1212 14:12:29.949059 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.449037194 +0000 UTC m=+133.284287021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.965005 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.968745 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.981472 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.981558 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.981660 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-7skgn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 12 14:12:29 crc kubenswrapper[5113]: I1212 14:12:29.981778 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" podUID="3d4e952f-6916-451e-aa61-2975f38fa7f4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.023913 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.023963 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" event={"ID":"7cedf898-adef-4e24-9d76-b6a19006883c","Type":"ContainerStarted","Data":"3f299890ee64d5f1c6f345859013d23110a3ebbbf95e5529cf67b3242fe38ca1"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.023986 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"58431747bea6d310ef069b371deaebc1e5e30087a972bc540015f25dc861eaa9"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024003 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"def2dc04-1b99-4546-a73f-2f956f9527e1","Type":"ContainerStarted","Data":"be07cc88d0dedbe43dea9595cba58a5b9a9146cd41a061fc90fd900bc42cee22"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024012 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"7027eb522ef85e5b512fdab1c22e22ad2d0f94a0ee1e9ea9eb511a8c4faadcf3"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024024 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerStarted","Data":"be5c600ae31a3cd6598ccd87f0818ab59cad228dc1acb86f237c89f72be2587f"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024035 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerStarted","Data":"ac2cbedf98ccc74d505af2bbf40b1777cf16ba37ad53d64c4cd995cbeba5d2de"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024067 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024100 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.024109 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.049862 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.049949 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.050099 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.050203 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdl6\" (UniqueName: \"kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.055223 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.555206265 +0000 UTC m=+133.390456092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.133972 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-zqpbf" podStartSLOduration=112.133956081 podStartE2EDuration="1m52.133956081s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.131017545 +0000 UTC m=+132.966267382" watchObservedRunningTime="2025-12-12 14:12:30.133956081 +0000 UTC m=+132.969205908" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.151782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.152020 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.152065 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.152088 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdl6\" (UniqueName: \"kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.152511 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.652469926 +0000 UTC m=+133.487719763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.152781 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.153050 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.225221 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdl6\" (UniqueName: \"kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6\") pod \"community-operators-94xtr\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " pod="openshift-marketplace/community-operators-94xtr"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.253019 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.253361 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.753349055 +0000 UTC m=+133.588598882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.275922 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-7skgn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.275981 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" podUID="3d4e952f-6916-451e-aa61-2975f38fa7f4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.330490 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xtr"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.357305 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.357478 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.857423819 +0000 UTC m=+133.692673646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.357655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.358086 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.85807271 +0000 UTC m=+133.693322537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.459993 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.460186 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.960158209 +0000 UTC m=+133.795408036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.460622 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.460945 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:30.960937724 +0000 UTC m=+133.796187551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528382 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" event={"ID":"ac816b90-036c-4024-a177-f6e32b250393","Type":"ContainerStarted","Data":"a4b8731daba6c4214650c0c84ae8526d2af6587da78f6971b4ed8cede07a8e11"}
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528742 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" event={"ID":"ff561547-8b9b-476c-8bc4-2fe57d56bcc0","Type":"ContainerStarted","Data":"5bdbcfef32236bb1aa93e2b99b781881dc88aa85e2c4824a495b7497e2850b74"}
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528760 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" event={"ID":"357b225a-0c71-40ba-ac24-d769a9ff3f07","Type":"ContainerStarted","Data":"c08f7d9437b5d5ccfd2c237e8e4236c6f79fbde04b88a0b5c731e66c601645ae"}
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528778 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" event={"ID":"c087108f-8a95-41df-be4f-7f61b16f4b74","Type":"ContainerStarted","Data":"33c57cc0437645f064e1722b4c822fa6ddc54e9d4df6dfac9a49671a387dcc08"}
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528790 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" event={"ID":"35aac665-ec48-4b0f-8e0a-b5b8b173ca1e","Type":"ContainerStarted","Data":"f779019b57f75289ad758af665ffcbcee5629da07ca017f794b9304c53a4a2ff"}
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528555 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppmfs"
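NOTE: Every pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 failure above is the same condition seen from two sides: the kubelet keeps queueing MountVolume operations for the incoming image-registry-66587d64c8-cd7rw pod and UnmountVolume operations for the outgoing pod 9e9b5059-1b3e-4067-a63d-2952cbe863af, and both are rejected because the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with this kubelet over the plugin-registration socket. A minimal way to watch for the registration from outside the node is to read the node's CSINode object, which is updated as drivers complete registration. The sketch below is illustrative only; the kubeconfig path is an assumption, and the node name "crc" is taken from these entries.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: path to a kubeconfig for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The CSINode object lists the CSI drivers that have completed
	// kubelet plugin registration on this node; a missing
	// kubevirt.io.hostpath-provisioner entry matches the errors above.
	cn, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range cn.Spec.Drivers {
		fmt.Printf("registered CSI driver: %s (nodeID %q)\n", d.Name, d.NodeID)
	}
}

If the driver never shows up in that list, the next place to look is usually the driver's node plugin pod and its registration sidecar rather than the kubelet itself.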
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528806 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" event={"ID":"66b65774-248e-407d-9e9d-c9f770175654","Type":"ContainerStarted","Data":"a83cdc4171a796758f45faf624450594c0af020a8269a75a79b36f3012410bc8"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528931 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" event={"ID":"1db49e24-69a6-47c8-b689-df0b1754efac","Type":"ContainerStarted","Data":"3aa5b6ff97fe5d13462e1a9df23fff11827cf125c6ebfd853adfe19c1f6a721e"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528976 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-f4974" event={"ID":"d4df8836-f797-48b9-905f-5790efb2e6af","Type":"ContainerStarted","Data":"9f7324bf463a349fea345097f371928359d7536bff34562b53f0a675d26b9718"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.528987 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" event={"ID":"7cedf898-adef-4e24-9d76-b6a19006883c","Type":"ContainerStarted","Data":"f00f80e6b90ecf1a516fed772853f561847c04b83abf881d170b836e7ded017e"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.529000 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.529195 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" event={"ID":"41890204-5a81-44d5-99eb-82be690cc03d","Type":"ContainerStarted","Data":"6fb668b1505d4cfe65f1fb1eb720b3d1fb598ab2ef0dba70b18db3d1ed1b1f6e"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.529280 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rlhjs" event={"ID":"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f","Type":"ContainerStarted","Data":"0356c1b640cde9eb70af926f6b88e585622065ab2dfcaae07c86e5bc86627229"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.529336 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.529386 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.541416 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.542077 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-f4974 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.542192 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-f4974" podUID="d4df8836-f797-48b9-905f-5790efb2e6af" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 
12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551337 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551377 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" event={"ID":"b8a7731c-7bfe-4a87-bfec-8e24fc0ab258","Type":"ContainerStarted","Data":"121bea51053fd8120f7abc6de472f2b8a8fac89152a77973af91f04bc9d36907"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551406 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" event={"ID":"00752359-7fda-4a4a-bddd-47c8c0939d7f","Type":"ContainerStarted","Data":"305688e3390bb3e9bdb5333d40ddda7d832725bf448d06b0c03d56f70731bf4c"} Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551432 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551460 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551482 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551501 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"] Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.551600 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.562266 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.562638 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.062615923 +0000 UTC m=+133.897865750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.611644 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-zx9gm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.611744 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-zx9gm" podUID="edd2186b-b29e-49dd-8b4f-ed2081fac2d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.642822 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-f4974 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.642889 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-f4974" podUID="d4df8836-f797-48b9-905f-5790efb2e6af" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666246 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666350 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzsw\" (UniqueName: \"kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " 
pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666594 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666725 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.666786 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfsf\" (UniqueName: \"kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.679972 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.179954514 +0000 UTC m=+134.015204341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.691246 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-4nblp" podStartSLOduration=112.691226687 podStartE2EDuration="1m52.691226687s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.6879665 +0000 UTC m=+133.523216347" watchObservedRunningTime="2025-12-12 14:12:30.691226687 +0000 UTC m=+133.526476514" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.692864 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7brq4" podStartSLOduration=112.692855884 podStartE2EDuration="1m52.692855884s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.635715534 +0000 UTC m=+133.470965381" watchObservedRunningTime="2025-12-12 14:12:30.692855884 +0000 UTC m=+133.528105711" Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.734165 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sntfz" 
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.738275 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.738397 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.768647 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.768975 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.769012 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzsw\" (UniqueName: \"kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.769043 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.769108 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.769179 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfsf\" (UniqueName: \"kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.769241 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.774074 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.274044594 +0000 UTC m=+134.109294431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.774540 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.776737 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.777051 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.789731 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.832427 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzsw\" (UniqueName: \"kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw\") pod \"redhat-marketplace-mc7wk\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.845059 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfsf\" (UniqueName: \"kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf\") pod \"redhat-marketplace-ppmfs\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.853191 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppmfs"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.853540 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-f4974" podStartSLOduration=112.853521004 podStartE2EDuration="1m52.853521004s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.852776737 +0000 UTC m=+133.688026574" watchObservedRunningTime="2025-12-12 14:12:30.853521004 +0000 UTC m=+133.688770831"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.871499 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.880492 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.380466146 +0000 UTC m=+134.215715973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.885536 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc7wk"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.891542 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-n69j2" podStartSLOduration=112.891527041 podStartE2EDuration="1m52.891527041s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.890947901 +0000 UTC m=+133.726197748" watchObservedRunningTime="2025-12-12 14:12:30.891527041 +0000 UTC m=+133.726776868"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.935375 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-jtjg7" podStartSLOduration=112.935326306 podStartE2EDuration="1m52.935326306s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.92957318 +0000 UTC m=+133.764823017" watchObservedRunningTime="2025-12-12 14:12:30.935326306 +0000 UTC m=+133.770576133"
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.983172 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:30 crc kubenswrapper[5113]: E1212 14:12:30.983404 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.483387402 +0000 UTC m=+134.318637229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:30 crc kubenswrapper[5113]: I1212 14:12:30.992768 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" podStartSLOduration=112.992752716 podStartE2EDuration="1m52.992752716s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:30.964092682 +0000 UTC m=+133.799342529" watchObservedRunningTime="2025-12-12 14:12:30.992752716 +0000 UTC m=+133.828002543"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.008495 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d99w4" podStartSLOduration=113.008472988 podStartE2EDuration="1m53.008472988s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:31.007651399 +0000 UTC m=+133.842901236" watchObservedRunningTime="2025-12-12 14:12:31.008472988 +0000 UTC m=+133.843722825"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.027313 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-sz7rz" podStartSLOduration=113.02729342 podStartE2EDuration="1m53.02729342s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:31.023581347 +0000 UTC m=+133.858831194" watchObservedRunningTime="2025-12-12 14:12:31.02729342 +0000 UTC m=+133.862543247"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.061655 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bvnd7" podStartSLOduration=113.061631207 podStartE2EDuration="1m53.061631207s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:31.05917912 +0000 UTC m=+133.894428967" watchObservedRunningTime="2025-12-12 14:12:31.061631207 +0000 UTC m=+133.896881024"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.091033 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.091507 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.591490863 +0000 UTC m=+134.426740690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.198110 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.198797 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.698773325 +0000 UTC m=+134.534023152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.286360 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aefb209-096f-4d97-bbde-df22378e9c13" containerID="c1a3b6741c9fad6520ebbd4700c32a6d4b61d3b7f5173e18afaf332c7e627ccd" exitCode=0
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.289782 5113 generic.go:358] "Generic (PLEG): container finished" podID="1c12c797-4410-4399-8ea4-138a53a8ef49" containerID="52788249c940f7f15a8c61b871adb5631f1af37b61a17a44a1a0d07c80d0aea8" exitCode=0
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.302750 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.303116 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.803099931 +0000 UTC m=+134.638349758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.403776 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.404252 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:31.904231183 +0000 UTC m=+134.739481010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: W1212 14:12:31.499135 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a2741c8_968d_4f77_8be2_35619f1b1f4d.slice/crio-3f21b0ae31c1042edeb6dc5bb326dbab1c7547525da060665d38bac7483ed7c4 WatchSource:0}: Error finding container 3f21b0ae31c1042edeb6dc5bb326dbab1c7547525da060665d38bac7483ed7c4: Status 404 returned error can't find the container with id 3f21b0ae31c1042edeb6dc5bb326dbab1c7547525da060665d38bac7483ed7c4
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.505747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.506022 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.006009659 +0000 UTC m=+134.841259486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.600451 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"]
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.600500 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-zx9gm"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.600514 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-zx9gm"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.600590 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-95skb"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.604953 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.606933 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.606994 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.106977975 +0000 UTC m=+134.942227802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.611065 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.611513 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.111495766 +0000 UTC m=+134.946745593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.612910 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.618439 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerDied","Data":"c1a3b6741c9fad6520ebbd4700c32a6d4b61d3b7f5173e18afaf332c7e627ccd"}
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.618475 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" event={"ID":"1c12c797-4410-4399-8ea4-138a53a8ef49","Type":"ContainerDied","Data":"52788249c940f7f15a8c61b871adb5631f1af37b61a17a44a1a0d07c80d0aea8"}
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.618487 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"]
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.715096 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.715305 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.215272193 +0000 UTC m=+135.050522030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.715541 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.715579 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.715604 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhm67\" (UniqueName: \"kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.715632 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.715940 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.215921726 +0000 UTC m=+135.051171643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.817369 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.817577 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.317544877 +0000 UTC m=+135.152794704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.817778 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.817823 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.818201 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhm67\" (UniqueName: \"kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.818278 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.818336 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.820143 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.820570 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.320548904 +0000 UTC m=+135.155798731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.840571 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhm67\" (UniqueName: \"kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67\") pod \"redhat-operators-rzcm5\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " pod="openshift-marketplace/redhat-operators-rzcm5"
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.936591 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 14:12:31 crc kubenswrapper[5113]: E1212 14:12:31.936910 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.436886869 +0000 UTC m=+135.272136696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 14:12:31 crc kubenswrapper[5113]: I1212 14:12:31.941051 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzcm5"
Need to start a new one" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.037820 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.038265 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.538247439 +0000 UTC m=+135.373497256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.139943 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.140588 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.640548633 +0000 UTC m=+135.475798460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.194997 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerStarted","Data":"b6f97e3b9e86e395c8c316a8a5c25d4eb21d6e872f6a8b5a4fafd50e793c954d"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195065 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195097 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195198 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195217 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195236 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.195458 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.202180 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-f4974 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.202251 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-f4974" podUID="d4df8836-f797-48b9-905f-5790efb2e6af" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.212722 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-95skb" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.241433 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bc6\" (UniqueName: \"kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.241549 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " 
pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.241664 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.242233 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.266448 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.766424979 +0000 UTC m=+135.601674816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.342998 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.343794 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bc6\" (UniqueName: \"kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.343842 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.343880 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.344382 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities\") pod 
\"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.344679 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.844452056 +0000 UTC m=+135.679701883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.345000 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.418201 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bc6\" (UniqueName: \"kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6\") pod \"redhat-operators-g9pnt\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.448046 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.448664 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:32.948644618 +0000 UTC m=+135.783894445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.497210 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" event={"ID":"2a79044c-9f1f-4d59-8e63-e138868ebdd2","Type":"ContainerStarted","Data":"b48383189cfcad3caa7078822f053b63eb6999d58baff6df990dddd99ae820f6"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.516501 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerStarted","Data":"3f21b0ae31c1042edeb6dc5bb326dbab1c7547525da060665d38bac7483ed7c4"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.552004 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.552481 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.052455266 +0000 UTC m=+135.887705113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.552558 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.553227 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.554556 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.05453774 +0000 UTC m=+135.889787567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.592265 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jvcjp" event={"ID":"357b225a-0c71-40ba-ac24-d769a9ff3f07","Type":"ContainerStarted","Data":"5ef1ff37161a90b5dcda49be709b409ee27fcc41f79ad86920a5c92524f690d8"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.626199 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerStarted","Data":"345c65b7d857446476b9e5d9693105fe3ad7e59c65fcca7431fb00a9d1fab41c"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.637282 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" event={"ID":"0bac4af3-97cb-49b6-bd5a-c238ce8aefe0","Type":"ContainerStarted","Data":"2aa9ef9c986ff6253bc191e978d6c943244b6dedea4dda754730bdc8418c1931"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.654295 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerStarted","Data":"cddd8b2db4311506e7c7671f746bb908e3b9edce6f9615086606616fb938c6e1"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.655450 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.656033 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" podStartSLOduration=114.656020815 podStartE2EDuration="1m54.656020815s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:32.592505477 +0000 UTC m=+135.427755324" watchObservedRunningTime="2025-12-12 14:12:32.656020815 +0000 UTC m=+135.491270642" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.656876 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.156858895 +0000 UTC m=+135.992108722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.712909 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vc9l4" podStartSLOduration=114.712888686 podStartE2EDuration="1m54.712888686s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:32.711514977 +0000 UTC m=+135.546764824" watchObservedRunningTime="2025-12-12 14:12:32.712888686 +0000 UTC m=+135.548138513" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.714524 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jvcjp" podStartSLOduration=114.714511214 podStartE2EDuration="1m54.714511214s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:32.654727389 +0000 UTC m=+135.489977246" watchObservedRunningTime="2025-12-12 14:12:32.714511214 +0000 UTC m=+135.549761051" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.742001 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"] Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.760425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.761508 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rlhjs" event={"ID":"74f90e1e-39f6-4433-9dd0-82e74f7b2b0f","Type":"ContainerStarted","Data":"cc113839bf262d705e6e3e02a319baed4e8e19a780d5c53e874bd379b65490a1"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.761590 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.771029 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.270999072 +0000 UTC m=+136.106248899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.776704 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=6.776633493 podStartE2EDuration="6.776633493s" podCreationTimestamp="2025-12-12 14:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:32.775058357 +0000 UTC m=+135.610308184" watchObservedRunningTime="2025-12-12 14:12:32.776633493 +0000 UTC m=+135.611883320" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.807158 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerStarted","Data":"dbc797a03724888ddaa10351dd8f84898eda3d26537c77e2851beed2358a2eb5"} Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.862991 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.863439 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.363411592 +0000 UTC m=+136.198661419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.899311 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rlhjs" podStartSLOduration=18.899286844 podStartE2EDuration="18.899286844s" podCreationTimestamp="2025-12-12 14:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:32.828238707 +0000 UTC m=+135.663488564" watchObservedRunningTime="2025-12-12 14:12:32.899286844 +0000 UTC m=+135.734536671" Dec 12 14:12:32 crc kubenswrapper[5113]: I1212 14:12:32.976337 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:32 crc kubenswrapper[5113]: E1212 14:12:32.979079 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.479060403 +0000 UTC m=+136.314310280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.077760 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.078339 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.578304008 +0000 UTC m=+136.413553835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.086679 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56678: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.163037 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56688: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.179361 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.179745 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.679731921 +0000 UTC m=+136.514981748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.267779 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56704: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.280342 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.280462 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.780434867 +0000 UTC m=+136.615684704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.280766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.281596 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.781585439 +0000 UTC m=+136.616835266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.323770 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56708: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.337221 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.383470 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume\") pod \"1c12c797-4410-4399-8ea4-138a53a8ef49\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.383618 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.383661 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2cp6\" (UniqueName: \"kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6\") pod \"1c12c797-4410-4399-8ea4-138a53a8ef49\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.383766 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume\") pod \"1c12c797-4410-4399-8ea4-138a53a8ef49\" (UID: \"1c12c797-4410-4399-8ea4-138a53a8ef49\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.383792 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.883767138 +0000 UTC m=+136.719016965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.384189 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.384516 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.884506095 +0000 UTC m=+136.719755922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.386625 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c12c797-4410-4399-8ea4-138a53a8ef49" (UID: "1c12c797-4410-4399-8ea4-138a53a8ef49"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.403067 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6" (OuterVolumeSpecName: "kube-api-access-b2cp6") pod "1c12c797-4410-4399-8ea4-138a53a8ef49" (UID: "1c12c797-4410-4399-8ea4-138a53a8ef49"). InnerVolumeSpecName "kube-api-access-b2cp6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.404834 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56720: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.405400 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c12c797-4410-4399-8ea4-138a53a8ef49" (UID: "1c12c797-4410-4399-8ea4-138a53a8ef49"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.485845 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.486156 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:33.986133295 +0000 UTC m=+136.821383132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.486159 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b2cp6\" (UniqueName: \"kubernetes.io/projected/1c12c797-4410-4399-8ea4-138a53a8ef49-kube-api-access-b2cp6\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.486194 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c12c797-4410-4399-8ea4-138a53a8ef49-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.486206 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c12c797-4410-4399-8ea4-138a53a8ef49-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.518351 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56722: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.587836 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.588374 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.088349725 +0000 UTC m=+136.923599622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.590280 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-f4974" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.608639 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"] Dec 12 14:12:33 crc kubenswrapper[5113]: W1212 14:12:33.612093 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda7e4a58_73e9_4950_ba80_c7bbdac8d654.slice/crio-84349f8f03bae683f0a2645abb146993f20ba22f45546ae7520c235416837abb WatchSource:0}: Error finding container 84349f8f03bae683f0a2645abb146993f20ba22f45546ae7520c235416837abb: Status 404 returned error can't find the container with id 84349f8f03bae683f0a2645abb146993f20ba22f45546ae7520c235416837abb Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.689321 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.689865 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.189848491 +0000 UTC m=+137.025098308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.801992 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.802398 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.30237243 +0000 UTC m=+137.137622257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.810478 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56736: no serving certificate available for the kubelet" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.857587 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" event={"ID":"1db49e24-69a6-47c8-b689-df0b1754efac","Type":"ContainerStarted","Data":"fd884c8c69107467c9f12cc39d3f2b5d7e8d085b9d4d8bcdb64679b66c2b6c94"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.861826 5113 generic.go:358] "Generic (PLEG): container finished" podID="1599be11-4b1f-4016-b780-1f93afc71aad" containerID="82e1dcba316aeccc72fc05489ea4719e23f672dce485f9ae5c511464f75cd59a" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.861930 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerDied","Data":"82e1dcba316aeccc72fc05489ea4719e23f672dce485f9ae5c511464f75cd59a"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.866093 5113 generic.go:358] "Generic (PLEG): container finished" podID="def2dc04-1b99-4546-a73f-2f956f9527e1" containerID="3ad67b518bdab55e5eadb2d86bd7d62a456c47edf889c6e803168a69b01e26d3" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.866201 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"def2dc04-1b99-4546-a73f-2f956f9527e1","Type":"ContainerDied","Data":"3ad67b518bdab55e5eadb2d86bd7d62a456c47edf889c6e803168a69b01e26d3"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.875595 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-cmps6" podStartSLOduration=115.875573065 podStartE2EDuration="1m55.875573065s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:33.871064583 +0000 UTC m=+136.706314420" watchObservedRunningTime="2025-12-12 14:12:33.875573065 +0000 UTC m=+136.710822892" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.889680 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" event={"ID":"b75fe240-84aa-4d9f-be64-7f2727566095","Type":"ContainerStarted","Data":"f85aa074fcabaf129b5eff14d82cb8414ccee89c59bd8409d18d6a6b0a859f39"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.901964 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerStarted","Data":"84349f8f03bae683f0a2645abb146993f20ba22f45546ae7520c235416837abb"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.903098 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:33 crc kubenswrapper[5113]: E1212 14:12:33.905373 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.405342968 +0000 UTC m=+137.240592795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.906916 5113 generic.go:358] "Generic (PLEG): container finished" podID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerID="dbc797a03724888ddaa10351dd8f84898eda3d26537c77e2851beed2358a2eb5" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.906733 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerDied","Data":"dbc797a03724888ddaa10351dd8f84898eda3d26537c77e2851beed2358a2eb5"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.909809 5113 generic.go:358] "Generic (PLEG): container finished" podID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerID="24f8f30cd9cadfe14d5107c82dada9167184b02eb53b1adb21b5f05185512d60" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.909931 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerDied","Data":"24f8f30cd9cadfe14d5107c82dada9167184b02eb53b1adb21b5f05185512d60"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.932872 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" event={"ID":"1c12c797-4410-4399-8ea4-138a53a8ef49","Type":"ContainerDied","Data":"c471f1a49b641b5ec5d84544096dfa838fe8f389c5aca1b12f4d5b1175b7d3e1"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.932927 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c471f1a49b641b5ec5d84544096dfa838fe8f389c5aca1b12f4d5b1175b7d3e1" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.933097 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425800-ks4pt" Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.940908 5113 generic.go:358] "Generic (PLEG): container finished" podID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerID="d3cfcd55a8d4d2e50c3daa5c14f768b48e376649d9760f1f29e56653d8a88711" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.941018 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerDied","Data":"d3cfcd55a8d4d2e50c3daa5c14f768b48e376649d9760f1f29e56653d8a88711"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.949531 5113 generic.go:358] "Generic (PLEG): container finished" podID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerID="557167fb8245824e0ee046216245803952a094dfd0ac63e799a620ec898062b5" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.949699 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerDied","Data":"557167fb8245824e0ee046216245803952a094dfd0ac63e799a620ec898062b5"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.963753 5113 generic.go:358] "Generic (PLEG): container finished" podID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerID="c2034757802f2a8afa5c9e928b4a81beca9b0467a728c6e51d4b0d86680b8722" exitCode=0 Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.964060 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerDied","Data":"c2034757802f2a8afa5c9e928b4a81beca9b0467a728c6e51d4b0d86680b8722"} Dec 12 14:12:33 crc kubenswrapper[5113]: I1212 14:12:33.964141 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerStarted","Data":"e546fa3a1ca3527794335b06ebb0b6b3c6d6bd683629ad8fddd17757a3f94a90"} Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.007212 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.008496 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.508475901 +0000 UTC m=+137.343725728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.108840 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.109869 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.609852773 +0000 UTC m=+137.445102600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.211363 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.211721 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.71170642 +0000 UTC m=+137.546956247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.251442 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56748: no serving certificate available for the kubelet" Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.312701 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.313120 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.813088991 +0000 UTC m=+137.648338818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.414161 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.415186 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:34.915154687 +0000 UTC m=+137.750404534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.515979 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.516211 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.016179336 +0000 UTC m=+137.851429163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.516355 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.516849 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.016831778 +0000 UTC m=+137.852081605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.618516 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.618638 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.118606763 +0000 UTC m=+137.953856590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.618807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.619371 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.119360941 +0000 UTC m=+137.954610768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.720730 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.721069 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.221048503 +0000 UTC m=+138.056298330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.822931 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.823807 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.323790322 +0000 UTC m=+138.159040149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.912515 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56750: no serving certificate available for the kubelet" Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.928614 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.928846 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.428822193 +0000 UTC m=+138.264072030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:34 crc kubenswrapper[5113]: I1212 14:12:34.930239 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:34 crc kubenswrapper[5113]: E1212 14:12:34.930468 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.430437851 +0000 UTC m=+138.265687678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.001985 5113 generic.go:358] "Generic (PLEG): container finished" podID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerID="c41afacdf7e09cdcb0258fc846a3c6c365058645f3acd344c19f0b9cb44c93b2" exitCode=0 Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.003456 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerDied","Data":"c41afacdf7e09cdcb0258fc846a3c6c365058645f3acd344c19f0b9cb44c93b2"} Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.032230 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.032679 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.532655182 +0000 UTC m=+138.367905019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.033072 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.034537 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.534523549 +0000 UTC m=+138.369773376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.077924 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.091049 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.103894 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.103988 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.135381 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.135710 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.635688352 +0000 UTC m=+138.470938179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.237341 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.237693 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.737674845 +0000 UTC m=+138.572924672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.338634 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.338917 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.83889661 +0000 UTC m=+138.674146437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.452420 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.453062 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:35.953043967 +0000 UTC m=+138.788293794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.563460 5113 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.564562 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.564856 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.064839381 +0000 UTC m=+138.900089208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.565818 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.638262 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.638612 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.656779 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.665556 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access\") pod \"def2dc04-1b99-4546-a73f-2f956f9527e1\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.665627 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir\") pod \"def2dc04-1b99-4546-a73f-2f956f9527e1\" (UID: \"def2dc04-1b99-4546-a73f-2f956f9527e1\") " Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.665834 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: E1212 14:12:35.666275 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 14:12:36.166255653 +0000 UTC m=+139.001505480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cd7rw" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.666486 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "def2dc04-1b99-4546-a73f-2f956f9527e1" (UID: "def2dc04-1b99-4546-a73f-2f956f9527e1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.674935 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "def2dc04-1b99-4546-a73f-2f956f9527e1" (UID: "def2dc04-1b99-4546-a73f-2f956f9527e1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.711019 5113 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T14:12:35.563498493Z","UUID":"7c2ecd6a-3676-41d2-84c7-316030174520","Handler":null,"Name":"","Endpoint":""} Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.716342 5113 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.716390 5113 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.767226 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.767880 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/def2dc04-1b99-4546-a73f-2f956f9527e1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.767904 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/def2dc04-1b99-4546-a73f-2f956f9527e1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.792874 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.869996 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871673 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="def2dc04-1b99-4546-a73f-2f956f9527e1" containerName="pruner" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871708 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="def2dc04-1b99-4546-a73f-2f956f9527e1" containerName="pruner" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871731 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c12c797-4410-4399-8ea4-138a53a8ef49" containerName="collect-profiles" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871741 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c12c797-4410-4399-8ea4-138a53a8ef49" containerName="collect-profiles" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871878 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="def2dc04-1b99-4546-a73f-2f956f9527e1" containerName="pruner" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.871897 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="1c12c797-4410-4399-8ea4-138a53a8ef49" containerName="collect-profiles" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.874867 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.879342 5113 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.879404 5113 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.916515 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 14:12:35 crc kubenswrapper[5113]: I1212 14:12:35.916749 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.015963 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.016833 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.053294 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cd7rw\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.055727 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"def2dc04-1b99-4546-a73f-2f956f9527e1","Type":"ContainerDied","Data":"be07cc88d0dedbe43dea9595cba58a5b9a9146cd41a061fc90fd900bc42cee22"} Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.055768 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be07cc88d0dedbe43dea9595cba58a5b9a9146cd41a061fc90fd900bc42cee22" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.055901 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.068163 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" event={"ID":"b75fe240-84aa-4d9f-be64-7f2727566095","Type":"ContainerStarted","Data":"065841bf85e74586b81203ea36d83a24b8fae9d508c4f4a123da53ef274cdd2e"} Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.074305 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jqdlj" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.264183 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.265200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.268218 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.269268 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59654: no serving certificate available for the kubelet" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.273341 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.367025 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.367180 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.367830 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.403796 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:36 crc kubenswrapper[5113]: I1212 14:12:36.757883 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:37 crc kubenswrapper[5113]: I1212 14:12:37.162776 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" event={"ID":"b75fe240-84aa-4d9f-be64-7f2727566095","Type":"ContainerStarted","Data":"186d384bcf144115b012a0ae71158136ca9973029d385f00dd35a0eecc3ebe5e"} Dec 12 14:12:37 crc kubenswrapper[5113]: I1212 14:12:37.236529 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:12:37 crc kubenswrapper[5113]: I1212 14:12:37.339017 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 14:12:37 crc kubenswrapper[5113]: W1212 14:12:37.356090 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc075aa3e_efea_4450_8735_8a9e76b0f236.slice/crio-41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a WatchSource:0}: Error finding container 41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a: Status 404 returned error can't find the container with id 41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a Dec 12 14:12:37 crc kubenswrapper[5113]: I1212 14:12:37.553938 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.280651 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" 
event={"ID":"b75fe240-84aa-4d9f-be64-7f2727566095","Type":"ContainerStarted","Data":"7cbf85ac4cd19dd2a54e5cd4a38a533b0871dedced4124c00b080a05e5e22fbf"} Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.283932 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c075aa3e-efea-4450-8735-8a9e76b0f236","Type":"ContainerStarted","Data":"41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a"} Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.286060 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" event={"ID":"87300cd0-fd46-44e7-9925-c8cf3322b686","Type":"ContainerStarted","Data":"a9a122761c42506f7ffc19b69805169cf2b3446332b21cdf0609ac308a7dd663"} Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.304449 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.312364 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qdlcg" podStartSLOduration=24.312346676 podStartE2EDuration="24.312346676s" podCreationTimestamp="2025-12-12 14:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:38.311870219 +0000 UTC m=+141.147120076" watchObservedRunningTime="2025-12-12 14:12:38.312346676 +0000 UTC m=+141.147596503" Dec 12 14:12:38 crc kubenswrapper[5113]: I1212 14:12:38.879275 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59670: no serving certificate available for the kubelet" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.297366 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c075aa3e-efea-4450-8735-8a9e76b0f236","Type":"ContainerStarted","Data":"5c548a8d1f735e32caff546f570dcc1ff17942d562b0a516230cdf6c6dcfaebf"} Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.301339 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" event={"ID":"87300cd0-fd46-44e7-9925-c8cf3322b686","Type":"ContainerStarted","Data":"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd"} Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.301567 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.317597 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.317576171 podStartE2EDuration="4.317576171s" podCreationTimestamp="2025-12-12 14:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.314278853 +0000 UTC m=+142.149528690" watchObservedRunningTime="2025-12-12 14:12:39.317576171 +0000 UTC m=+142.152825998" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.348476 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" podStartSLOduration=121.348450854 podStartE2EDuration="2m1.348450854s" podCreationTimestamp="2025-12-12 14:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:12:39.345892722 +0000 UTC m=+142.181142569" watchObservedRunningTime="2025-12-12 14:12:39.348450854 +0000 UTC m=+142.183700681" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.970552 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.970674 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.979278 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-7skgn" Dec 12 14:12:39 crc kubenswrapper[5113]: I1212 14:12:39.981735 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rlhjs" Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.315212 5113 generic.go:358] "Generic (PLEG): container finished" podID="c075aa3e-efea-4450-8735-8a9e76b0f236" containerID="5c548a8d1f735e32caff546f570dcc1ff17942d562b0a516230cdf6c6dcfaebf" exitCode=0 Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.316451 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c075aa3e-efea-4450-8735-8a9e76b0f236","Type":"ContainerDied","Data":"5c548a8d1f735e32caff546f570dcc1ff17942d562b0a516230cdf6c6dcfaebf"} Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.610841 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-zx9gm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.610954 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-zx9gm" podUID="edd2186b-b29e-49dd-8b4f-ed2081fac2d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.728568 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:40 crc kubenswrapper[5113]: I1212 14:12:40.728657 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.030336 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59674: no serving certificate available for the kubelet" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.045249 5113 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.107636 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c075aa3e-efea-4450-8735-8a9e76b0f236" (UID: "c075aa3e-efea-4450-8735-8a9e76b0f236"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.107691 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir\") pod \"c075aa3e-efea-4450-8735-8a9e76b0f236\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.107918 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access\") pod \"c075aa3e-efea-4450-8735-8a9e76b0f236\" (UID: \"c075aa3e-efea-4450-8735-8a9e76b0f236\") " Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.108204 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c075aa3e-efea-4450-8735-8a9e76b0f236-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.129498 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c075aa3e-efea-4450-8735-8a9e76b0f236" (UID: "c075aa3e-efea-4450-8735-8a9e76b0f236"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.210015 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c075aa3e-efea-4450-8735-8a9e76b0f236-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.347011 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"c075aa3e-efea-4450-8735-8a9e76b0f236","Type":"ContainerDied","Data":"41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a"} Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.347072 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e3524f342bb56832d13dc959f8a0878f95f2b940524fa38d0903304e3af68a" Dec 12 14:12:44 crc kubenswrapper[5113]: I1212 14:12:44.347170 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 14:12:45 crc kubenswrapper[5113]: E1212 14:12:45.063277 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:45 crc kubenswrapper[5113]: E1212 14:12:45.066237 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:45 crc kubenswrapper[5113]: E1212 14:12:45.067880 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:45 crc kubenswrapper[5113]: E1212 14:12:45.067994 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:12:49 crc kubenswrapper[5113]: I1212 14:12:49.970636 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:49 crc kubenswrapper[5113]: I1212 14:12:49.970947 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.604787 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.609767 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-zx9gm" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.761780 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.761850 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.761913 5113 kubelet.go:2658] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.763221 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.763549 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.765325 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4ea4965a434bdd97000eeb4b5c0717f710b79bbb2c9cca89c6af0dfc9345ba13"} pod="openshift-console/downloads-747b44746d-wnqfw" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 12 14:12:50 crc kubenswrapper[5113]: I1212 14:12:50.765664 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" containerID="cri-o://4ea4965a434bdd97000eeb4b5c0717f710b79bbb2c9cca89c6af0dfc9345ba13" gracePeriod=2 Dec 12 14:12:52 crc kubenswrapper[5113]: I1212 14:12:52.389298 5113 generic.go:358] "Generic (PLEG): container finished" podID="4723fa2f-a114-4d27-875f-951678d39dde" containerID="4ea4965a434bdd97000eeb4b5c0717f710b79bbb2c9cca89c6af0dfc9345ba13" exitCode=0 Dec 12 14:12:52 crc kubenswrapper[5113]: I1212 14:12:52.389433 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-wnqfw" event={"ID":"4723fa2f-a114-4d27-875f-951678d39dde","Type":"ContainerDied","Data":"4ea4965a434bdd97000eeb4b5c0717f710b79bbb2c9cca89c6af0dfc9345ba13"} Dec 12 14:12:54 crc kubenswrapper[5113]: I1212 14:12:54.299835 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50900: no serving certificate available for the kubelet" Dec 12 14:12:55 crc kubenswrapper[5113]: E1212 14:12:55.063741 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:55 crc kubenswrapper[5113]: E1212 14:12:55.065911 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:55 crc kubenswrapper[5113]: E1212 14:12:55.067430 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:12:55 crc kubenswrapper[5113]: E1212 14:12:55.067514 5113 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.319953 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.430636 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rlsj9_5311c643-bfa2-4959-bc65-a6e4e4f5cd22/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.430960 5113 generic.go:358] "Generic (PLEG): container finished" podID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" exitCode=137 Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.431266 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" event={"ID":"5311c643-bfa2-4959-bc65-a6e4e4f5cd22","Type":"ContainerDied","Data":"75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915"} Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.764095 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:00 crc kubenswrapper[5113]: I1212 14:13:00.764177 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:02 crc kubenswrapper[5113]: I1212 14:13:02.204334 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-q2zlc" Dec 12 14:13:02 crc kubenswrapper[5113]: I1212 14:13:02.816924 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 14:13:05 crc kubenswrapper[5113]: E1212 14:13:05.060639 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:05 crc kubenswrapper[5113]: E1212 14:13:05.063212 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:05 crc kubenswrapper[5113]: E1212 14:13:05.063472 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:05 crc kubenswrapper[5113]: E1212 14:13:05.063517 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.230162 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.231588 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c075aa3e-efea-4450-8735-8a9e76b0f236" containerName="pruner" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.231640 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c075aa3e-efea-4450-8735-8a9e76b0f236" containerName="pruner" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.231788 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c075aa3e-efea-4450-8735-8a9e76b0f236" containerName="pruner" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.241005 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.241186 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.243801 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.244269 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.340851 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.341136 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.450222 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.450340 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.450430 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.481591 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:06 crc kubenswrapper[5113]: I1212 14:13:06.560322 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:10 crc kubenswrapper[5113]: I1212 14:13:10.764871 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:10 crc kubenswrapper[5113]: I1212 14:13:10.765278 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.034471 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.213155 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.213319 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.270624 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.270952 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.271146 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.372229 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.372836 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.372469 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " 
pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.372949 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.373111 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.488386 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access\") pod \"installer-12-crc\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:11 crc kubenswrapper[5113]: I1212 14:13:11.552166 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:14 crc kubenswrapper[5113]: I1212 14:13:14.802768 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56350: no serving certificate available for the kubelet" Dec 12 14:13:15 crc kubenswrapper[5113]: E1212 14:13:15.060310 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:15 crc kubenswrapper[5113]: E1212 14:13:15.060632 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:15 crc kubenswrapper[5113]: E1212 14:13:15.062113 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 14:13:15 crc kubenswrapper[5113]: E1212 14:13:15.062349 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.590278 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rlsj9_5311c643-bfa2-4959-bc65-a6e4e4f5cd22/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.590607 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" event={"ID":"5311c643-bfa2-4959-bc65-a6e4e4f5cd22","Type":"ContainerDied","Data":"461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd"} Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.590645 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461f76e8884b324562582d1cc3f70afb624cea0bb057e8bbdf4ffe049e596bcd" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.590827 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rlsj9_5311c643-bfa2-4959-bc65-a6e4e4f5cd22/kube-multus-additional-cni-plugins/0.log" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.590909 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.674981 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist\") pod \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.675065 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhjxh\" (UniqueName: \"kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh\") pod \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.675160 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready\") pod \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.675221 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir\") pod \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\" (UID: \"5311c643-bfa2-4959-bc65-a6e4e4f5cd22\") " Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.675494 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "5311c643-bfa2-4959-bc65-a6e4e4f5cd22" (UID: "5311c643-bfa2-4959-bc65-a6e4e4f5cd22"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.676172 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "5311c643-bfa2-4959-bc65-a6e4e4f5cd22" (UID: "5311c643-bfa2-4959-bc65-a6e4e4f5cd22"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.676356 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready" (OuterVolumeSpecName: "ready") pod "5311c643-bfa2-4959-bc65-a6e4e4f5cd22" (UID: "5311c643-bfa2-4959-bc65-a6e4e4f5cd22"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.735919 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh" (OuterVolumeSpecName: "kube-api-access-rhjxh") pod "5311c643-bfa2-4959-bc65-a6e4e4f5cd22" (UID: "5311c643-bfa2-4959-bc65-a6e4e4f5cd22"). InnerVolumeSpecName "kube-api-access-rhjxh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.777691 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.777734 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhjxh\" (UniqueName: \"kubernetes.io/projected/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-kube-api-access-rhjxh\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.777746 5113 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-ready\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:15 crc kubenswrapper[5113]: I1212 14:13:15.777758 5113 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5311c643-bfa2-4959-bc65-a6e4e4f5cd22-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.041477 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.064092 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.636828 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerStarted","Data":"fa09f1bce15ef091690c85e88ce54f20557ae077fbd4fbd3a9fd66edd3cf2a6d"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.649716 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a11e3c14-75a9-4f84-b320-13b8f4cd509e","Type":"ContainerStarted","Data":"a252a93945dbd7fd040339361361d88bfcda1f27c3ee4e2c5d8cf3cf52c0fed1"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.683507 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerStarted","Data":"4b83e0198c131bc381e50856316d7406034e678893a5c7b30dba36dead8fba33"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.707093 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" 
event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerStarted","Data":"c8c8fc8fc16b07bc09ec467f02574390e15db7a439aa4c6ebf5010b9f97d9fe6"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.720591 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"242308b9-716d-4df6-b38a-412f6c34c561","Type":"ContainerStarted","Data":"1f0182a48bd3a1504d1a914fee2ce33150f24d340004cfd6ab535abb809b13ee"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.733585 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerStarted","Data":"9b3e8977e14fcffca80d8396e99649f6c805c6b160bafd81697572ccfc1fc69e"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.741240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-wnqfw" event={"ID":"4723fa2f-a114-4d27-875f-951678d39dde","Type":"ContainerStarted","Data":"143ab3abeb1bc5193635b5edbbd1e8aac071b5099802535f2795e345b0586c7e"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.741717 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.741772 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.741804 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.751668 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerStarted","Data":"8289d9f56a7fb449b02acee3365d6e8103cfa637ade1733dbb9441b95363132e"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.760293 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerStarted","Data":"a2910b02aab40413d4a0677e459c8516faba51a06b0a09268c18a925d975100a"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.763338 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerStarted","Data":"01ce2823f9f94fcb385b48f282f113e519372b3183ee51df207f46f51c06e497"} Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.765673 5113 generic.go:358] "Generic (PLEG): container finished" podID="1599be11-4b1f-4016-b780-1f93afc71aad" containerID="d4503e0223348ba24b7dbd4f2494764cfaafef453fff1c4f144270fe619ee60c" exitCode=0 Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.765864 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:13:16 crc kubenswrapper[5113]: I1212 14:13:16.770484 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerDied","Data":"d4503e0223348ba24b7dbd4f2494764cfaafef453fff1c4f144270fe619ee60c"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.869469 5113 generic.go:358] "Generic (PLEG): container finished" podID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerID="4b83e0198c131bc381e50856316d7406034e678893a5c7b30dba36dead8fba33" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.869763 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerDied","Data":"4b83e0198c131bc381e50856316d7406034e678893a5c7b30dba36dead8fba33"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.874566 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aefb209-096f-4d97-bbde-df22378e9c13" containerID="c8c8fc8fc16b07bc09ec467f02574390e15db7a439aa4c6ebf5010b9f97d9fe6" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.874834 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerDied","Data":"c8c8fc8fc16b07bc09ec467f02574390e15db7a439aa4c6ebf5010b9f97d9fe6"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.890369 5113 generic.go:358] "Generic (PLEG): container finished" podID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerID="9b3e8977e14fcffca80d8396e99649f6c805c6b160bafd81697572ccfc1fc69e" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.890647 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerDied","Data":"9b3e8977e14fcffca80d8396e99649f6c805c6b160bafd81697572ccfc1fc69e"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.910226 5113 generic.go:358] "Generic (PLEG): container finished" podID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerID="8289d9f56a7fb449b02acee3365d6e8103cfa637ade1733dbb9441b95363132e" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.910421 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerDied","Data":"8289d9f56a7fb449b02acee3365d6e8103cfa637ade1733dbb9441b95363132e"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.924151 5113 generic.go:358] "Generic (PLEG): container finished" podID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerID="a2910b02aab40413d4a0677e459c8516faba51a06b0a09268c18a925d975100a" exitCode=0 Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.924206 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerDied","Data":"a2910b02aab40413d4a0677e459c8516faba51a06b0a09268c18a925d975100a"} Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.926322 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 
10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:17 crc kubenswrapper[5113]: I1212 14:13:17.926373 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.931240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerStarted","Data":"685dc44faab54c18fa0de2fc7a49b9453cbef7bed5e68a0546135d7457c24b29"} Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.935629 5113 generic.go:358] "Generic (PLEG): container finished" podID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerID="01ce2823f9f94fcb385b48f282f113e519372b3183ee51df207f46f51c06e497" exitCode=0 Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.935942 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerDied","Data":"01ce2823f9f94fcb385b48f282f113e519372b3183ee51df207f46f51c06e497"} Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.943585 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerStarted","Data":"52d19997fa3cb59e59f4fee390700d36334c59ffb9a8e930ca96989d84317034"} Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.944058 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.944241 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:18 crc kubenswrapper[5113]: I1212 14:13:18.962814 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ppmfs" podStartSLOduration=9.302643791 podStartE2EDuration="50.962779647s" podCreationTimestamp="2025-12-12 14:12:28 +0000 UTC" firstStartedPulling="2025-12-12 14:12:33.91742672 +0000 UTC m=+136.752676547" lastFinishedPulling="2025-12-12 14:13:15.577562576 +0000 UTC m=+178.412812403" observedRunningTime="2025-12-12 14:13:18.95724291 +0000 UTC m=+181.792492757" watchObservedRunningTime="2025-12-12 14:13:18.962779647 +0000 UTC m=+181.798029474" Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.949883 5113 generic.go:358] "Generic (PLEG): container finished" podID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerID="fa09f1bce15ef091690c85e88ce54f20557ae077fbd4fbd3a9fd66edd3cf2a6d" exitCode=0 Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.950256 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerDied","Data":"fa09f1bce15ef091690c85e88ce54f20557ae077fbd4fbd3a9fd66edd3cf2a6d"} 
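[annotation] Three probe mechanisms are visible in this stretch of the log: an exec readiness probe on the cni-sysctl-allowlist pod (cmd=["/bin/bash","-c","test -f /ready/ready"]), which errors with NotFound rather than merely failing because the CRI ExecSync call itself cannot find the container process (hence probeResult="unknown"); HTTP readiness and liveness probes against http://10.217.0.33:8080/ on the console downloads pod, where "connection refused" means nothing is listening on the port yet; and, in the entries that follow, startup probes on the marketplace registry-server containers whose output ("timeout: failed to connect service \":50051\" within 1s") matches a grpc-health-probe style check. A minimal sketch of how such probes could be declared with the k8s.io/api/core/v1 Go types, assuming field names from recent k8s.io/api releases; the period, threshold, and timeout values are illustrative assumptions, not read from the cluster's actual manifests:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Exec readiness probe: the kubelet runs this command inside the container
    // via CRI ExecSync, so once the container has exited the probe errors
    // (NotFound) instead of returning failure, as in the entries above.
    execReadiness := corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            Exec: &corev1.ExecAction{
                Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
            },
        },
    }

    // HTTP readiness probe: a GET against the pod IP; "connection refused"
    // keeps the pod not-ready until the server binds the port.
    httpReadiness := corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/",
                Port: intstr.FromInt(8080),
            },
        },
        PeriodSeconds:    10, // assumed
        FailureThreshold: 3,  // assumed
    }

    // Startup probe in the style of the registry-server containers: an exec
    // check of the gRPC endpoint on :50051 with a 1s timeout, matching the
    // "timeout: failed to connect service" output seen later in the log.
    startup := corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            Exec: &corev1.ExecAction{
                Command: []string{"grpc_health_probe", "-addr=:50051"},
            },
        },
        TimeoutSeconds: 1,
    }

    fmt.Printf("%+v\n%+v\n%+v\n", execReadiness, httpReadiness, startup)
}

On this reading, the repeated "Probe errored ... probeResult=unknown" lines are the exec probe racing container teardown, while the later "SyncLoop (probe)" transitions (unhealthy, then started, then ready) are the startup and readiness probes flipping once each registry-server begins answering on :50051.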
Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.960464 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a11e3c14-75a9-4f84-b320-13b8f4cd509e","Type":"ContainerStarted","Data":"3f0115cb4749d823ebfa90072687842a09de9bb7ce9d4d5c5c084ba88781c9f4"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.964169 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerStarted","Data":"ea446cf8240020f64b053a7f202913cc32090ba323f88bb450cd10268fd6390f"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.967220 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerStarted","Data":"f56fb335b7c839cfa4be9a9d5149e3549d706c4dfd372dd8780bd94d26dd6c7a"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.969327 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"242308b9-716d-4df6-b38a-412f6c34c561","Type":"ContainerStarted","Data":"b0f342f06699e33699d3a0368497b33c05c4d6ddabf2fe996b14087297105325"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.972682 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerStarted","Data":"d90acab17b3fefef26c68bbfc8b43c61c30ba26b6a72091257b05669dc28518b"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.975493 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerStarted","Data":"e5d6291ef382c2a1b81963fb24cf3af26c3aebba102af936cd0a3952359ac070"} Dec 12 14:13:19 crc kubenswrapper[5113]: I1212 14:13:19.983488 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mc7wk" podStartSLOduration=10.295152211 podStartE2EDuration="51.983461443s" podCreationTimestamp="2025-12-12 14:12:28 +0000 UTC" firstStartedPulling="2025-12-12 14:12:33.862875261 +0000 UTC m=+136.698125088" lastFinishedPulling="2025-12-12 14:13:15.551184493 +0000 UTC m=+178.386434320" observedRunningTime="2025-12-12 14:13:19.005228213 +0000 UTC m=+181.840478050" watchObservedRunningTime="2025-12-12 14:13:19.983461443 +0000 UTC m=+182.818711270" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.026249 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=14.026228831 podStartE2EDuration="14.026228831s" podCreationTimestamp="2025-12-12 14:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:13:20.022911373 +0000 UTC m=+182.858161220" watchObservedRunningTime="2025-12-12 14:13:20.026228831 +0000 UTC m=+182.861478668" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.042771 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtkxn" podStartSLOduration=11.297297589 podStartE2EDuration="54.04273527s" podCreationTimestamp="2025-12-12 14:12:26 +0000 UTC" firstStartedPulling="2025-12-12 14:12:32.806604813 +0000 UTC m=+135.641854640" 
lastFinishedPulling="2025-12-12 14:13:15.552042494 +0000 UTC m=+178.387292321" observedRunningTime="2025-12-12 14:13:20.040542472 +0000 UTC m=+182.875792309" watchObservedRunningTime="2025-12-12 14:13:20.04273527 +0000 UTC m=+182.877985097" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.730781 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.731328 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.854724 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.854774 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.889454 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.889505 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:20 crc kubenswrapper[5113]: I1212 14:13:20.985629 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerStarted","Data":"4d4078a33f54ea983811fb11ea9a645d1675d690c475410b84b353c0431b9d0a"} Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.011876 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rzcm5" podStartSLOduration=10.409049146 podStartE2EDuration="52.011839274s" podCreationTimestamp="2025-12-12 14:12:29 +0000 UTC" firstStartedPulling="2025-12-12 14:12:33.968365869 +0000 UTC m=+136.803615696" lastFinishedPulling="2025-12-12 14:13:15.571155997 +0000 UTC m=+178.406405824" observedRunningTime="2025-12-12 14:13:21.008169963 +0000 UTC m=+183.843419790" watchObservedRunningTime="2025-12-12 14:13:21.011839274 +0000 UTC m=+183.847089101" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.038695 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qwjt" podStartSLOduration=13.389386264 podStartE2EDuration="55.038676223s" podCreationTimestamp="2025-12-12 14:12:26 +0000 UTC" firstStartedPulling="2025-12-12 14:12:33.942000787 +0000 UTC m=+136.777250614" lastFinishedPulling="2025-12-12 14:13:15.591290746 +0000 UTC m=+178.426540573" observedRunningTime="2025-12-12 14:13:21.033298071 +0000 UTC m=+183.868547908" watchObservedRunningTime="2025-12-12 14:13:21.038676223 +0000 UTC m=+183.873926040" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.089797 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94xtr" podStartSLOduration=12.441646031 podStartE2EDuration="54.089775348s" 
podCreationTimestamp="2025-12-12 14:12:27 +0000 UTC" firstStartedPulling="2025-12-12 14:12:33.950721599 +0000 UTC m=+136.785971426" lastFinishedPulling="2025-12-12 14:13:15.598850916 +0000 UTC m=+178.434100743" observedRunningTime="2025-12-12 14:13:21.069334278 +0000 UTC m=+183.904584125" watchObservedRunningTime="2025-12-12 14:13:21.089775348 +0000 UTC m=+183.925025175" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.090146 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=10.090140771 podStartE2EDuration="10.090140771s" podCreationTimestamp="2025-12-12 14:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:13:21.086276623 +0000 UTC m=+183.921526480" watchObservedRunningTime="2025-12-12 14:13:21.090140771 +0000 UTC m=+183.925390598" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.108814 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xk8lq" podStartSLOduration=12.338209947 podStartE2EDuration="55.108797187s" podCreationTimestamp="2025-12-12 14:12:26 +0000 UTC" firstStartedPulling="2025-12-12 14:12:32.806949705 +0000 UTC m=+135.642199532" lastFinishedPulling="2025-12-12 14:13:15.577536945 +0000 UTC m=+178.412786772" observedRunningTime="2025-12-12 14:13:21.104807904 +0000 UTC m=+183.940057751" watchObservedRunningTime="2025-12-12 14:13:21.108797187 +0000 UTC m=+183.944047014" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.941315 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:13:21 crc kubenswrapper[5113]: I1212 14:13:21.941463 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.034824 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerStarted","Data":"0ee723303c77271d2781d34ded4414e99abde48b6276b8c2c145a303d2fa5941"} Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.038957 5113 generic.go:358] "Generic (PLEG): container finished" podID="242308b9-716d-4df6-b38a-412f6c34c561" containerID="b0f342f06699e33699d3a0368497b33c05c4d6ddabf2fe996b14087297105325" exitCode=0 Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.039838 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"242308b9-716d-4df6-b38a-412f6c34c561","Type":"ContainerDied","Data":"b0f342f06699e33699d3a0368497b33c05c4d6ddabf2fe996b14087297105325"} Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.078708 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9pnt" podStartSLOduration=11.454835768 podStartE2EDuration="52.07868044s" podCreationTimestamp="2025-12-12 14:12:30 +0000 UTC" firstStartedPulling="2025-12-12 14:12:35.003302834 +0000 UTC m=+137.838552661" lastFinishedPulling="2025-12-12 14:13:15.627147506 +0000 UTC m=+178.462397333" observedRunningTime="2025-12-12 14:13:22.061232587 +0000 UTC m=+184.896482424" watchObservedRunningTime="2025-12-12 14:13:22.07868044 +0000 UTC m=+184.913930267" Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.337552 
5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mc7wk" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:22 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:22 crc kubenswrapper[5113]: > Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.342670 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ppmfs" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:22 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:22 crc kubenswrapper[5113]: > Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.553839 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.554028 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:22 crc kubenswrapper[5113]: I1212 14:13:22.984081 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rzcm5" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:22 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:22 crc kubenswrapper[5113]: > Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.323596 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.352860 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir\") pod \"242308b9-716d-4df6-b38a-412f6c34c561\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.352950 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access\") pod \"242308b9-716d-4df6-b38a-412f6c34c561\" (UID: \"242308b9-716d-4df6-b38a-412f6c34c561\") " Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.353037 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "242308b9-716d-4df6-b38a-412f6c34c561" (UID: "242308b9-716d-4df6-b38a-412f6c34c561"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.353520 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/242308b9-716d-4df6-b38a-412f6c34c561-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.377960 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "242308b9-716d-4df6-b38a-412f6c34c561" (UID: "242308b9-716d-4df6-b38a-412f6c34c561"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.454575 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/242308b9-716d-4df6-b38a-412f6c34c561-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:23 crc kubenswrapper[5113]: I1212 14:13:23.591458 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9pnt" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="registry-server" probeResult="failure" output=< Dec 12 14:13:23 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Dec 12 14:13:23 crc kubenswrapper[5113]: > Dec 12 14:13:24 crc kubenswrapper[5113]: I1212 14:13:24.054191 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"242308b9-716d-4df6-b38a-412f6c34c561","Type":"ContainerDied","Data":"1f0182a48bd3a1504d1a914fee2ce33150f24d340004cfd6ab535abb809b13ee"} Dec 12 14:13:24 crc kubenswrapper[5113]: I1212 14:13:24.054254 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f0182a48bd3a1504d1a914fee2ce33150f24d340004cfd6ab535abb809b13ee" Dec 12 14:13:24 crc kubenswrapper[5113]: I1212 14:13:24.054274 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.061373 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.061428 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.155389 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.263632 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.417339 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.417422 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:13:27 crc kubenswrapper[5113]: I1212 14:13:27.548148 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:13:28 crc kubenswrapper[5113]: I1212 14:13:28.129178 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:13:28 crc kubenswrapper[5113]: I1212 14:13:28.943301 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:28 crc kubenswrapper[5113]: I1212 14:13:28.943423 5113 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:29 crc kubenswrapper[5113]: I1212 14:13:29.776172 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:29 crc kubenswrapper[5113]: I1212 14:13:29.776472 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:29 crc kubenswrapper[5113]: I1212 14:13:29.822520 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.189768 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.338015 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.338504 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.387890 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.727756 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-wnqfw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.727855 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-wnqfw" podUID="4723fa2f-a114-4d27-875f-951678d39dde" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 12 14:13:30 crc kubenswrapper[5113]: I1212 14:13:30.960788 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:13:31 crc kubenswrapper[5113]: I1212 14:13:31.026737 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:31 crc kubenswrapper[5113]: I1212 14:13:31.067289 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:13:31 crc kubenswrapper[5113]: I1212 14:13:31.210213 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:31 crc kubenswrapper[5113]: I1212 14:13:31.263261 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:31 crc kubenswrapper[5113]: I1212 14:13:31.988005 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:13:32 crc kubenswrapper[5113]: I1212 14:13:32.030020 5113 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:13:32 crc kubenswrapper[5113]: I1212 14:13:32.475032 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:13:32 crc kubenswrapper[5113]: I1212 14:13:32.475594 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qwjt" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="registry-server" containerID="cri-o://d90acab17b3fefef26c68bbfc8b43c61c30ba26b6a72091257b05669dc28518b" gracePeriod=2 Dec 12 14:13:32 crc kubenswrapper[5113]: I1212 14:13:32.646950 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:32 crc kubenswrapper[5113]: I1212 14:13:32.704574 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:33 crc kubenswrapper[5113]: I1212 14:13:33.476349 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"] Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.131971 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerDied","Data":"d90acab17b3fefef26c68bbfc8b43c61c30ba26b6a72091257b05669dc28518b"} Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.131868 5113 generic.go:358] "Generic (PLEG): container finished" podID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerID="d90acab17b3fefef26c68bbfc8b43c61c30ba26b6a72091257b05669dc28518b" exitCode=0 Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.132800 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g9pnt" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="registry-server" containerID="cri-o://0ee723303c77271d2781d34ded4414e99abde48b6276b8c2c145a303d2fa5941" gracePeriod=2 Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.533077 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.577939 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities\") pod \"3128ee41-7d4c-448e-b5f1-8c73827104e9\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.578095 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content\") pod \"3128ee41-7d4c-448e-b5f1-8c73827104e9\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.578149 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j5w7\" (UniqueName: \"kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7\") pod \"3128ee41-7d4c-448e-b5f1-8c73827104e9\" (UID: \"3128ee41-7d4c-448e-b5f1-8c73827104e9\") " Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.579717 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities" (OuterVolumeSpecName: "utilities") pod "3128ee41-7d4c-448e-b5f1-8c73827104e9" (UID: "3128ee41-7d4c-448e-b5f1-8c73827104e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.585326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7" (OuterVolumeSpecName: "kube-api-access-8j5w7") pod "3128ee41-7d4c-448e-b5f1-8c73827104e9" (UID: "3128ee41-7d4c-448e-b5f1-8c73827104e9"). InnerVolumeSpecName "kube-api-access-8j5w7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.680714 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8j5w7\" (UniqueName: \"kubernetes.io/projected/3128ee41-7d4c-448e-b5f1-8c73827104e9-kube-api-access-8j5w7\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.680757 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.764261 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3128ee41-7d4c-448e-b5f1-8c73827104e9" (UID: "3128ee41-7d4c-448e-b5f1-8c73827104e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.783050 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3128ee41-7d4c-448e-b5f1-8c73827104e9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.876988 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:13:34 crc kubenswrapper[5113]: I1212 14:13:34.877381 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94xtr" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="registry-server" containerID="cri-o://e5d6291ef382c2a1b81963fb24cf3af26c3aebba102af936cd0a3952359ac070" gracePeriod=2 Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.206918 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qwjt" event={"ID":"3128ee41-7d4c-448e-b5f1-8c73827104e9","Type":"ContainerDied","Data":"b6f97e3b9e86e395c8c316a8a5c25d4eb21d6e872f6a8b5a4fafd50e793c954d"} Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.206979 5113 scope.go:117] "RemoveContainer" containerID="d90acab17b3fefef26c68bbfc8b43c61c30ba26b6a72091257b05669dc28518b" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.207185 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qwjt" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.221172 5113 generic.go:358] "Generic (PLEG): container finished" podID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerID="0ee723303c77271d2781d34ded4414e99abde48b6276b8c2c145a303d2fa5941" exitCode=0 Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.221238 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerDied","Data":"0ee723303c77271d2781d34ded4414e99abde48b6276b8c2c145a303d2fa5941"} Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.234043 5113 scope.go:117] "RemoveContainer" containerID="8289d9f56a7fb449b02acee3365d6e8103cfa637ade1733dbb9441b95363132e" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.244506 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.247083 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qwjt"] Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.264880 5113 scope.go:117] "RemoveContainer" containerID="d3cfcd55a8d4d2e50c3daa5c14f768b48e376649d9760f1f29e56653d8a88711" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.419947 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.489767 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" path="/var/lib/kubelet/pods/3128ee41-7d4c-448e-b5f1-8c73827104e9/volumes" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.494428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content\") pod \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.494551 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5bc6\" (UniqueName: \"kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6\") pod \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.494667 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities\") pod \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\" (UID: \"da7e4a58-73e9-4950-ba80-c7bbdac8d654\") " Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.495696 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities" (OuterVolumeSpecName: "utilities") pod "da7e4a58-73e9-4950-ba80-c7bbdac8d654" (UID: "da7e4a58-73e9-4950-ba80-c7bbdac8d654"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.505368 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6" (OuterVolumeSpecName: "kube-api-access-b5bc6") pod "da7e4a58-73e9-4950-ba80-c7bbdac8d654" (UID: "da7e4a58-73e9-4950-ba80-c7bbdac8d654"). InnerVolumeSpecName "kube-api-access-b5bc6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.595849 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5bc6\" (UniqueName: \"kubernetes.io/projected/da7e4a58-73e9-4950-ba80-c7bbdac8d654-kube-api-access-b5bc6\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.595888 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.615974 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da7e4a58-73e9-4950-ba80-c7bbdac8d654" (UID: "da7e4a58-73e9-4950-ba80-c7bbdac8d654"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.697246 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7e4a58-73e9-4950-ba80-c7bbdac8d654-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.877656 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:13:35 crc kubenswrapper[5113]: I1212 14:13:35.878402 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mc7wk" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="registry-server" containerID="cri-o://52d19997fa3cb59e59f4fee390700d36334c59ffb9a8e930ca96989d84317034" gracePeriod=2 Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.228730 5113 generic.go:358] "Generic (PLEG): container finished" podID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerID="e5d6291ef382c2a1b81963fb24cf3af26c3aebba102af936cd0a3952359ac070" exitCode=0 Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.228785 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerDied","Data":"e5d6291ef382c2a1b81963fb24cf3af26c3aebba102af936cd0a3952359ac070"} Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.231319 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9pnt" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.231325 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9pnt" event={"ID":"da7e4a58-73e9-4950-ba80-c7bbdac8d654","Type":"ContainerDied","Data":"84349f8f03bae683f0a2645abb146993f20ba22f45546ae7520c235416837abb"} Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.231388 5113 scope.go:117] "RemoveContainer" containerID="0ee723303c77271d2781d34ded4414e99abde48b6276b8c2c145a303d2fa5941" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.259371 5113 scope.go:117] "RemoveContainer" containerID="fa09f1bce15ef091690c85e88ce54f20557ae077fbd4fbd3a9fd66edd3cf2a6d" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.263562 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"] Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.267362 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g9pnt"] Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.286901 5113 scope.go:117] "RemoveContainer" containerID="c41afacdf7e09cdcb0258fc846a3c6c365058645f3acd344c19f0b9cb44c93b2" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.680307 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.810422 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities\") pod \"26ca03d9-d718-4adb-8c84-0386e98421c2\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.810661 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content\") pod \"26ca03d9-d718-4adb-8c84-0386e98421c2\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.810696 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdl6\" (UniqueName: \"kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6\") pod \"26ca03d9-d718-4adb-8c84-0386e98421c2\" (UID: \"26ca03d9-d718-4adb-8c84-0386e98421c2\") " Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.812318 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities" (OuterVolumeSpecName: "utilities") pod "26ca03d9-d718-4adb-8c84-0386e98421c2" (UID: "26ca03d9-d718-4adb-8c84-0386e98421c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.816103 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6" (OuterVolumeSpecName: "kube-api-access-5vdl6") pod "26ca03d9-d718-4adb-8c84-0386e98421c2" (UID: "26ca03d9-d718-4adb-8c84-0386e98421c2"). InnerVolumeSpecName "kube-api-access-5vdl6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.858698 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26ca03d9-d718-4adb-8c84-0386e98421c2" (UID: "26ca03d9-d718-4adb-8c84-0386e98421c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.912400 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.912438 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vdl6\" (UniqueName: \"kubernetes.io/projected/26ca03d9-d718-4adb-8c84-0386e98421c2-kube-api-access-5vdl6\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:36 crc kubenswrapper[5113]: I1212 14:13:36.912448 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ca03d9-d718-4adb-8c84-0386e98421c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.238767 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xtr" event={"ID":"26ca03d9-d718-4adb-8c84-0386e98421c2","Type":"ContainerDied","Data":"345c65b7d857446476b9e5d9693105fe3ad7e59c65fcca7431fb00a9d1fab41c"} Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.238832 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xtr" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.238893 5113 scope.go:117] "RemoveContainer" containerID="e5d6291ef382c2a1b81963fb24cf3af26c3aebba102af936cd0a3952359ac070" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.242112 5113 generic.go:358] "Generic (PLEG): container finished" podID="1599be11-4b1f-4016-b780-1f93afc71aad" containerID="52d19997fa3cb59e59f4fee390700d36334c59ffb9a8e930ca96989d84317034" exitCode=0 Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.242454 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerDied","Data":"52d19997fa3cb59e59f4fee390700d36334c59ffb9a8e930ca96989d84317034"} Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.255212 5113 scope.go:117] "RemoveContainer" containerID="a2910b02aab40413d4a0677e459c8516faba51a06b0a09268c18a925d975100a" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.278244 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.279846 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94xtr"] Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.288606 5113 scope.go:117] "RemoveContainer" containerID="557167fb8245824e0ee046216245803952a094dfd0ac63e799a620ec898062b5" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.489677 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" path="/var/lib/kubelet/pods/26ca03d9-d718-4adb-8c84-0386e98421c2/volumes" Dec 12 14:13:37 crc kubenswrapper[5113]: I1212 14:13:37.490462 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" path="/var/lib/kubelet/pods/da7e4a58-73e9-4950-ba80-c7bbdac8d654/volumes" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.561278 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.638223 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content\") pod \"1599be11-4b1f-4016-b780-1f93afc71aad\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.638309 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities\") pod \"1599be11-4b1f-4016-b780-1f93afc71aad\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.638474 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nzsw\" (UniqueName: \"kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw\") pod \"1599be11-4b1f-4016-b780-1f93afc71aad\" (UID: \"1599be11-4b1f-4016-b780-1f93afc71aad\") " Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.639145 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities" (OuterVolumeSpecName: "utilities") pod "1599be11-4b1f-4016-b780-1f93afc71aad" (UID: "1599be11-4b1f-4016-b780-1f93afc71aad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.644690 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw" (OuterVolumeSpecName: "kube-api-access-8nzsw") pod "1599be11-4b1f-4016-b780-1f93afc71aad" (UID: "1599be11-4b1f-4016-b780-1f93afc71aad"). InnerVolumeSpecName "kube-api-access-8nzsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.648479 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1599be11-4b1f-4016-b780-1f93afc71aad" (UID: "1599be11-4b1f-4016-b780-1f93afc71aad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.744657 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.744696 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1599be11-4b1f-4016-b780-1f93afc71aad-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.744706 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nzsw\" (UniqueName: \"kubernetes.io/projected/1599be11-4b1f-4016-b780-1f93afc71aad-kube-api-access-8nzsw\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:38 crc kubenswrapper[5113]: I1212 14:13:38.962615 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-wnqfw" Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.263837 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mc7wk" event={"ID":"1599be11-4b1f-4016-b780-1f93afc71aad","Type":"ContainerDied","Data":"cddd8b2db4311506e7c7671f746bb908e3b9edce6f9615086606616fb938c6e1"} Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.264226 5113 scope.go:117] "RemoveContainer" containerID="52d19997fa3cb59e59f4fee390700d36334c59ffb9a8e930ca96989d84317034" Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.263863 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mc7wk" Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.291962 5113 scope.go:117] "RemoveContainer" containerID="d4503e0223348ba24b7dbd4f2494764cfaafef453fff1c4f144270fe619ee60c" Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.296066 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.298214 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mc7wk"] Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.313757 5113 scope.go:117] "RemoveContainer" containerID="82e1dcba316aeccc72fc05489ea4719e23f672dce485f9ae5c511464f75cd59a" Dec 12 14:13:39 crc kubenswrapper[5113]: I1212 14:13:39.490374 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" path="/var/lib/kubelet/pods/1599be11-4b1f-4016-b780-1f93afc71aad/volumes" Dec 12 14:13:47 crc kubenswrapper[5113]: I1212 14:13:47.713728 5113 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod5311c643-bfa2-4959-bc65-a6e4e4f5cd22"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod5311c643-bfa2-4959-bc65-a6e4e4f5cd22] : Timed out while waiting for systemd to remove kubepods-burstable-pod5311c643_bfa2_4959_bc65_a6e4e4f5cd22.slice" Dec 12 14:13:47 crc kubenswrapper[5113]: E1212 14:13:47.715365 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod5311c643-bfa2-4959-bc65-a6e4e4f5cd22] : unable to destroy cgroup paths for cgroup [kubepods burstable pod5311c643-bfa2-4959-bc65-a6e4e4f5cd22] : Timed out while waiting for systemd to remove 
kubepods-burstable-pod5311c643_bfa2_4959_bc65_a6e4e4f5cd22.slice" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" Dec 12 14:13:48 crc kubenswrapper[5113]: I1212 14:13:48.343755 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rlsj9" Dec 12 14:13:48 crc kubenswrapper[5113]: I1212 14:13:48.361426 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rlsj9"] Dec 12 14:13:48 crc kubenswrapper[5113]: I1212 14:13:48.364154 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rlsj9"] Dec 12 14:13:49 crc kubenswrapper[5113]: I1212 14:13:49.493169 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" path="/var/lib/kubelet/pods/5311c643-bfa2-4959-bc65-a6e4e4f5cd22/volumes" Dec 12 14:13:52 crc kubenswrapper[5113]: I1212 14:13:52.451059 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2t5sb"] Dec 12 14:13:55 crc kubenswrapper[5113]: I1212 14:13:55.791360 5113 ???:1] "http: TLS handshake error from 192.168.126.11:42584: no serving certificate available for the kubelet" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.071960 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.072750 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://ae59a492a24b7e319fb6e2535bd395840e015bc16f625d44a75bc0d8b996b8e6" gracePeriod=15 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.072875 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://206f507a6fdf88d3e1d29676b4a01b01c2d876ce2806953af724790345c9e763" gracePeriod=15 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.072901 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b580f18ad4f07a1213ec639cdb9df787c5ef723b26eded55ee758cb6f9f62cb9" gracePeriod=15 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.072911 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4dcb1f921c832112e5a3717359d76d330218be7e53f95c41d75b5738ce073c00" gracePeriod=15 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.073037 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1b9ea27857a99679db2fd57e2d2ab8a18cb0ceb4dbb017a67fa058f8afd2f605" gracePeriod=15 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.076320 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 
14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.077025 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.077051 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.077062 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.077069 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080544 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080606 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080661 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080699 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080751 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="242308b9-716d-4df6-b38a-412f6c34c561" containerName="pruner" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080761 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="242308b9-716d-4df6-b38a-412f6c34c561" containerName="pruner" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080774 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080793 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080815 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080822 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080885 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080954 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080968 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="extract-content" Dec 12 14:13:58 
crc kubenswrapper[5113]: I1212 14:13:58.080975 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.080993 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081002 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="extract-utilities" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081011 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081020 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081031 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081037 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081047 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081052 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081065 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081072 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081080 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081088 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081096 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081102 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081114 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081145 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 
14:13:58.081157 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081163 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081177 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081678 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081738 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081765 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081786 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081794 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081906 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.081942 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="extract-content" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082452 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082725 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082756 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="da7e4a58-73e9-4950-ba80-c7bbdac8d654" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082767 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5311c643-bfa2-4959-bc65-a6e4e4f5cd22" containerName="kube-multus-additional-cni-plugins" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082778 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082789 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082797 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="1599be11-4b1f-4016-b780-1f93afc71aad" containerName="registry-server" Dec 12 
14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082807 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="242308b9-716d-4df6-b38a-412f6c34c561" containerName="pruner" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082815 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082825 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082835 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3128ee41-7d4c-448e-b5f1-8c73827104e9" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082844 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="26ca03d9-d718-4adb-8c84-0386e98421c2" containerName="registry-server" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082856 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.082866 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.083957 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.084043 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.084057 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.084074 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.084203 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.137348 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.141616 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.148614 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.173755 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: E1212 14:13:58.174421 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192561 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192631 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192648 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192672 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192703 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192727 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192753 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192770 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192789 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.192831 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293744 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293809 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293837 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293854 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293880 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293900 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293920 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293948 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293965 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.293980 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294073 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294111 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294150 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294170 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294192 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 
crc kubenswrapper[5113]: I1212 14:13:58.294214 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294318 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294354 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294410 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.294537 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.404453 5113 generic.go:358] "Generic (PLEG): container finished" podID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" containerID="3f0115cb4749d823ebfa90072687842a09de9bb7ce9d4d5c5c084ba88781c9f4" exitCode=0 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.404544 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a11e3c14-75a9-4f84-b320-13b8f4cd509e","Type":"ContainerDied","Data":"3f0115cb4749d823ebfa90072687842a09de9bb7ce9d4d5c5c084ba88781c9f4"} Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.405153 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.406474 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.407917 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.408697 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="1b9ea27857a99679db2fd57e2d2ab8a18cb0ceb4dbb017a67fa058f8afd2f605" exitCode=0 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.408721 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4dcb1f921c832112e5a3717359d76d330218be7e53f95c41d75b5738ce073c00" exitCode=0 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.408731 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b580f18ad4f07a1213ec639cdb9df787c5ef723b26eded55ee758cb6f9f62cb9" exitCode=0 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.408740 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="206f507a6fdf88d3e1d29676b4a01b01c2d876ce2806953af724790345c9e763" exitCode=2 Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.408747 5113 scope.go:117] "RemoveContainer" containerID="b77d3f8a0aa844d1d9bce00a4fbc928f96ec80e137240a6c83676a75400166f5" Dec 12 14:13:58 crc kubenswrapper[5113]: I1212 14:13:58.474943 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:58 crc kubenswrapper[5113]: E1212 14:13:58.497095 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18807d507830cd6b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,LastTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.415503 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.418109 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1"} Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.418196 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"586858b45a7e0842cfd5e6a2dde9b2ac4d444df5149a2fe9e4f7583b48e6d58a"} Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.418506 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:59 crc 
kubenswrapper[5113]: I1212 14:13:59.419025 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:13:59 crc kubenswrapper[5113]: E1212 14:13:59.419042 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.626944 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.627854 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.714362 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir\") pod \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.714522 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock\") pod \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.714557 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access\") pod \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\" (UID: \"a11e3c14-75a9-4f84-b320-13b8f4cd509e\") " Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.714567 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a11e3c14-75a9-4f84-b320-13b8f4cd509e" (UID: "a11e3c14-75a9-4f84-b320-13b8f4cd509e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.714682 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a11e3c14-75a9-4f84-b320-13b8f4cd509e" (UID: "a11e3c14-75a9-4f84-b320-13b8f4cd509e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.715093 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.715156 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3c14-75a9-4f84-b320-13b8f4cd509e-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.721837 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a11e3c14-75a9-4f84-b320-13b8f4cd509e" (UID: "a11e3c14-75a9-4f84-b320-13b8f4cd509e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:13:59 crc kubenswrapper[5113]: I1212 14:13:59.816305 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a11e3c14-75a9-4f84-b320-13b8f4cd509e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.425605 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.426512 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ae59a492a24b7e319fb6e2535bd395840e015bc16f625d44a75bc0d8b996b8e6" exitCode=0 Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.428078 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a11e3c14-75a9-4f84-b320-13b8f4cd509e","Type":"ContainerDied","Data":"a252a93945dbd7fd040339361361d88bfcda1f27c3ee4e2c5d8cf3cf52c0fed1"} Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.428140 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a252a93945dbd7fd040339361361d88bfcda1f27c3ee4e2c5d8cf3cf52c0fed1" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.428275 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.488220 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.491485 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.492261 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.492913 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.493599 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534138 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534270 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534322 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534412 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534482 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534414 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534505 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534669 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.534940 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.535971 5113 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.536028 5113 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.536043 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.536054 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.538202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:14:00 crc kubenswrapper[5113]: I1212 14:14:00.637765 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.437345 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.438670 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.438682 5113 scope.go:117] "RemoveContainer" containerID="1b9ea27857a99679db2fd57e2d2ab8a18cb0ceb4dbb017a67fa058f8afd2f605" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.455812 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.457535 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.458512 5113 scope.go:117] "RemoveContainer" containerID="4dcb1f921c832112e5a3717359d76d330218be7e53f95c41d75b5738ce073c00" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.471961 5113 scope.go:117] "RemoveContainer" containerID="b580f18ad4f07a1213ec639cdb9df787c5ef723b26eded55ee758cb6f9f62cb9" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.487785 5113 scope.go:117] "RemoveContainer" containerID="206f507a6fdf88d3e1d29676b4a01b01c2d876ce2806953af724790345c9e763" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.489492 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.511037 5113 scope.go:117] "RemoveContainer" containerID="ae59a492a24b7e319fb6e2535bd395840e015bc16f625d44a75bc0d8b996b8e6" Dec 12 14:14:01 crc kubenswrapper[5113]: I1212 14:14:01.526867 5113 scope.go:117] "RemoveContainer" containerID="ccb7f9b929e828dd8220bfa92ffca05f4a2a78a97f17a86787dcf729ff4feafc" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.346248 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.347155 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.347412 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.347597 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.347758 5113 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:02 crc kubenswrapper[5113]: I1212 14:14:02.347775 5113 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.347931 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.548444 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Dec 12 14:14:02 crc kubenswrapper[5113]: E1212 14:14:02.948950 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Dec 12 14:14:03 crc kubenswrapper[5113]: E1212 14:14:03.750729 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Dec 12 14:14:03 crc kubenswrapper[5113]: E1212 14:14:03.855712 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18807d507830cd6b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,LastTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:14:05 crc kubenswrapper[5113]: E1212 14:14:05.351529 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Dec 12 14:14:07 crc kubenswrapper[5113]: I1212 14:14:07.487170 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:08 crc kubenswrapper[5113]: E1212 14:14:08.502757 5113 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" volumeName="registry-storage" Dec 12 14:14:08 crc kubenswrapper[5113]: E1212 14:14:08.553066 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Dec 12 14:14:11 crc kubenswrapper[5113]: E1212 14:14:11.269580 5113 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-conmon-8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec.scope\": RecentStats: unable to find data in memory cache]" Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.511076 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.511164 5113 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec" exitCode=1 Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.511245 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec"} Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.511755 5113 scope.go:117] "RemoveContainer" containerID="8a3faa9196622f095cf411cc0eda711dbe60bf2d7c1dd872fb6909902318f8ec" Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.512256 5113 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:11 crc kubenswrapper[5113]: I1212 14:14:11.512681 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.482464 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.483829 5113 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.484399 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.501110 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.501203 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:12 crc kubenswrapper[5113]: E1212 14:14:12.501850 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.502291 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.533514 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.533686 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e2ffd194f1f00970d34e31ac7050b8df6d96c962608fbd13938576d34894e96b"} Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.535053 5113 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:12 crc kubenswrapper[5113]: I1212 14:14:12.535727 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:12 crc kubenswrapper[5113]: W1212 14:14:12.540781 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-789e28baf1d451e5956ad2db435c36d10a94afb9dd9b05a88139e47ba36e672c WatchSource:0}: Error 
finding container 789e28baf1d451e5956ad2db435c36d10a94afb9dd9b05a88139e47ba36e672c: Status 404 returned error can't find the container with id 789e28baf1d451e5956ad2db435c36d10a94afb9dd9b05a88139e47ba36e672c Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.545767 5113 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="1fd17f35ca8720781c82af73ff85dc4d11ed324f733ab4c1000b3db2450fc96f" exitCode=0 Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.545952 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"1fd17f35ca8720781c82af73ff85dc4d11ed324f733ab4c1000b3db2450fc96f"} Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.546244 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"789e28baf1d451e5956ad2db435c36d10a94afb9dd9b05a88139e47ba36e672c"} Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.546686 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.546703 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:13 crc kubenswrapper[5113]: E1212 14:14:13.547186 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.547407 5113 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:13 crc kubenswrapper[5113]: I1212 14:14:13.547622 5113 status_manager.go:895] "Failed to get status for pod" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 12 14:14:13 crc kubenswrapper[5113]: E1212 14:14:13.857248 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18807d507830cd6b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,LastTimestamp:2025-12-12 14:13:58.496554347 +0000 UTC m=+221.331804174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 14:14:14 crc kubenswrapper[5113]: I1212 14:14:14.559197 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"cad5ce28e789769da86bee23cd016412c9516c899de2b0f769b0060ba431f08f"} Dec 12 14:14:14 crc kubenswrapper[5113]: I1212 14:14:14.559601 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4afcd2d7508c2e5f1ad65c57e32e6aafdf85ae045ed5edae210af0528121b9e1"} Dec 12 14:14:14 crc kubenswrapper[5113]: I1212 14:14:14.559625 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"40c633c5f22f3f64e2e94e661ec886a65e5c3a33eb187de80989fbf66fad396d"} Dec 12 14:14:15 crc kubenswrapper[5113]: I1212 14:14:15.570275 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"984b83c6716ac1ebc4813982cd5639fd1b3c4132bfba8ef74bb00beec8f3a0a6"} Dec 12 14:14:15 crc kubenswrapper[5113]: I1212 14:14:15.570357 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3aa7f6c0ce72a733aff75f4a08902905dfa9c9b33e116b30cb62c45625ab3256"} Dec 12 14:14:15 crc kubenswrapper[5113]: I1212 14:14:15.570458 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:15 crc kubenswrapper[5113]: I1212 14:14:15.570598 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:15 crc kubenswrapper[5113]: I1212 14:14:15.570631 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:17 crc kubenswrapper[5113]: I1212 14:14:17.488920 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerName="oauth-openshift" containerID="cri-o://bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a" gracePeriod=15 Dec 12 14:14:17 crc kubenswrapper[5113]: I1212 14:14:17.502405 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:17 crc kubenswrapper[5113]: I1212 14:14:17.502467 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:17 crc kubenswrapper[5113]: I1212 14:14:17.512030 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:17 crc kubenswrapper[5113]: I1212 14:14:17.899706 5113 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041464 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041545 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041575 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041626 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041651 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041664 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8z7\" (UniqueName: \"kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041695 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041746 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041840 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041905 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041948 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041970 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.041996 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.042023 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc 
kubenswrapper[5113]: I1212 14:14:18.042053 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data\") pod \"c55aed1a-bd22-4591-9394-247b0dbca87d\" (UID: \"c55aed1a-bd22-4591-9394-247b0dbca87d\") " Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.043044 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.043090 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.053186 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.053229 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.053772 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.054663 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.061258 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.061548 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.062270 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.062482 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.062645 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.062778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.069331 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7" (OuterVolumeSpecName: "kube-api-access-8w8z7") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "kube-api-access-8w8z7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.069757 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c55aed1a-bd22-4591-9394-247b0dbca87d" (UID: "c55aed1a-bd22-4591-9394-247b0dbca87d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144574 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144620 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144637 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144650 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144664 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144681 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144692 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144700 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144708 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144717 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8w8z7\" (UniqueName: \"kubernetes.io/projected/c55aed1a-bd22-4591-9394-247b0dbca87d-kube-api-access-8w8z7\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144726 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144734 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.144744 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c55aed1a-bd22-4591-9394-247b0dbca87d-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.594244 5113 generic.go:358] "Generic (PLEG): container finished" podID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerID="bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a" exitCode=0 Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.594446 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" event={"ID":"c55aed1a-bd22-4591-9394-247b0dbca87d","Type":"ContainerDied","Data":"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a"} Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.594480 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" event={"ID":"c55aed1a-bd22-4591-9394-247b0dbca87d","Type":"ContainerDied","Data":"92bd33b4313f287cc063de0fbe6c6c812e4a115dc3df17dd810b55d1dfd99351"} Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.594498 5113 scope.go:117] "RemoveContainer" containerID="bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.594702 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2t5sb" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.614235 5113 scope.go:117] "RemoveContainer" containerID="bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a" Dec 12 14:14:18 crc kubenswrapper[5113]: E1212 14:14:18.614712 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a\": container with ID starting with bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a not found: ID does not exist" containerID="bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a" Dec 12 14:14:18 crc kubenswrapper[5113]: I1212 14:14:18.614750 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a"} err="failed to get container status \"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a\": rpc error: code = NotFound desc = could not find container \"bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a\": container with ID starting with bf8108b919ee02d8d18685ab371f7485d71576379cf60348ea609c707f03be4a not found: ID does not exist" Dec 12 14:14:19 crc kubenswrapper[5113]: I1212 14:14:19.436543 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:19 crc kubenswrapper[5113]: I1212 14:14:19.437021 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 14:14:19 crc 
kubenswrapper[5113]: I1212 14:14:19.437159 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 14:14:19 crc kubenswrapper[5113]: I1212 14:14:19.583893 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:20 crc kubenswrapper[5113]: I1212 14:14:20.587786 5113 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:20 crc kubenswrapper[5113]: I1212 14:14:20.587830 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:20 crc kubenswrapper[5113]: I1212 14:14:20.682452 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="fbd43c5f-3b9e-41f1-a1e3-37760078d62b" Dec 12 14:14:20 crc kubenswrapper[5113]: I1212 14:14:20.902360 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:14:20 crc kubenswrapper[5113]: I1212 14:14:20.902446 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:14:21 crc kubenswrapper[5113]: E1212 14:14:21.075822 5113 reflector.go:200] "Failed to watch" err="configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"audit\"" type="*v1.ConfigMap" Dec 12 14:14:21 crc kubenswrapper[5113]: E1212 14:14:21.425265 5113 reflector.go:200] "Failed to watch" err="secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" type="*v1.Secret" Dec 12 14:14:21 crc kubenswrapper[5113]: I1212 14:14:21.627431 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:21 crc kubenswrapper[5113]: I1212 14:14:21.627480 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:21 crc kubenswrapper[5113]: I1212 14:14:21.629823 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="fbd43c5f-3b9e-41f1-a1e3-37760078d62b" Dec 12 14:14:21 crc kubenswrapper[5113]: I1212 14:14:21.632044 5113 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://40c633c5f22f3f64e2e94e661ec886a65e5c3a33eb187de80989fbf66fad396d" Dec 12 14:14:21 crc kubenswrapper[5113]: I1212 14:14:21.632069 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:22 crc kubenswrapper[5113]: I1212 14:14:22.624838 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:22 crc kubenswrapper[5113]: I1212 14:14:22.625182 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:22 crc kubenswrapper[5113]: I1212 14:14:22.627937 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="fbd43c5f-3b9e-41f1-a1e3-37760078d62b" Dec 12 14:14:29 crc kubenswrapper[5113]: I1212 14:14:29.442229 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:29 crc kubenswrapper[5113]: I1212 14:14:29.447160 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.117803 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.232390 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.768089 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.788431 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.845482 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 14:14:30 crc kubenswrapper[5113]: I1212 14:14:30.972222 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:31 crc kubenswrapper[5113]: I1212 14:14:31.092230 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 14:14:31 crc kubenswrapper[5113]: I1212 14:14:31.734313 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 12 14:14:31 crc kubenswrapper[5113]: I1212 14:14:31.736260 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 12 14:14:31 crc kubenswrapper[5113]: I1212 
14:14:31.944767 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 14:14:32 crc kubenswrapper[5113]: I1212 14:14:32.215689 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 14:14:32 crc kubenswrapper[5113]: I1212 14:14:32.364061 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 14:14:32 crc kubenswrapper[5113]: I1212 14:14:32.670928 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:32 crc kubenswrapper[5113]: I1212 14:14:32.677217 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 12 14:14:32 crc kubenswrapper[5113]: I1212 14:14:32.922240 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.078059 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.155849 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.208967 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.210983 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.377388 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.505011 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.531757 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.576288 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.672287 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.703333 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.723037 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.760844 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.869963 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.878472 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.884284 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 14:14:33 crc kubenswrapper[5113]: I1212 14:14:33.908143 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.065774 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.105824 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.139539 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.154389 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.190832 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.191212 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.209159 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.274388 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.319613 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.358722 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.441289 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.446915 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.536771 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.552274 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.623804 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.648881 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.677097 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.771652 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.813444 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.815489 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.837456 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.896192 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.916496 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 14:14:34 crc kubenswrapper[5113]: I1212 14:14:34.986912 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.052523 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.085067 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.138847 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.163535 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.335882 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.528690 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.579144 5113 
reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.588080 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.606967 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.610212 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.782637 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.783017 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.789625 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.792225 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.792548 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.818542 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.854357 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.869166 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.891267 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.897985 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.953413 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 14:14:35 crc kubenswrapper[5113]: I1212 14:14:35.994176 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.064625 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.114112 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 14:14:36 crc 
kubenswrapper[5113]: I1212 14:14:36.167509 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.193205 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.210981 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.350103 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.501464 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.626100 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.709361 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.737786 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.810674 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.886941 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.910482 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.915352 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.968645 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 14:14:36 crc kubenswrapper[5113]: I1212 14:14:36.980454 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.004171 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.090836 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.123310 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.140971 5113 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.254930 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.337908 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.341690 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.420002 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.502623 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.605806 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.631328 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.713154 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.720638 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.779872 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.833014 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.869504 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.892327 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 14:14:37 crc kubenswrapper[5113]: I1212 14:14:37.924663 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.004162 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.082966 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.108453 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 
14:14:38.119159 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.119180 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.194710 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.330413 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.375395 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.391923 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.404809 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.434381 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.450226 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.519114 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.555951 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.557322 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.592554 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.594732 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.699010 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.778426 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.798396 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.852405 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.935034 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 14:14:38 crc kubenswrapper[5113]: I1212 14:14:38.995908 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.063016 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.112923 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.199154 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.234361 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.287410 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.406951 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.493861 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.503344 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.523648 5113 ???:1] "http: TLS handshake error from 192.168.126.11:43646: no serving certificate available for the kubelet" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.524188 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.529971 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.620474 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.778780 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.843290 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.846282 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 
14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.855190 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 12 14:14:39 crc kubenswrapper[5113]: I1212 14:14:39.942845 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.050611 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.121824 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.248372 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.256378 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.420034 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.505076 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.510170 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.608430 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.626480 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.638618 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.638620 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.646112 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.717349 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.908075 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 14:14:40 crc kubenswrapper[5113]: I1212 14:14:40.935228 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 14:14:41 crc 
kubenswrapper[5113]: I1212 14:14:41.019686 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.244988 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.249545 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-2t5sb"] Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.249607 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64fdd788dd-v58ff","openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250332 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerName="oauth-openshift" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250356 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerName="oauth-openshift" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250380 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" containerName="installer" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250388 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" containerName="installer" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250534 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" containerName="oauth-openshift" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250554 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="a11e3c14-75a9-4f84-b320-13b8f4cd509e" containerName="installer" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250824 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.250850 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5336457e-56b8-4455-9cbb-388bab880a59" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.306583 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.306884 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.309670 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.309807 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.309861 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.309933 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.309973 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.310071 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.310096 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.310458 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.310535 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.310679 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.315980 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.316539 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.316918 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.319458 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.324924 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.334207 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.334190558 podStartE2EDuration="21.334190558s" podCreationTimestamp="2025-12-12 14:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:14:41.332882396 +0000 UTC m=+264.168132243" watchObservedRunningTime="2025-12-12 14:14:41.334190558 +0000 UTC m=+264.169440385" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.346555 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355596 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355687 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355751 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-service-ca\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355770 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355789 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-error\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355834 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-router-certs\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355867 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-session\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355894 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-audit-policies\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355915 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23871358-eea1-4e1e-a5fc-c09251530e29-audit-dir\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355952 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vc9t\" (UniqueName: \"kubernetes.io/projected/23871358-eea1-4e1e-a5fc-c09251530e29-kube-api-access-9vc9t\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.355979 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-login\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.356029 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.457257 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-session\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.457895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-audit-policies\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458028 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23871358-eea1-4e1e-a5fc-c09251530e29-audit-dir\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458166 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23871358-eea1-4e1e-a5fc-c09251530e29-audit-dir\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458187 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vc9t\" (UniqueName: \"kubernetes.io/projected/23871358-eea1-4e1e-a5fc-c09251530e29-kube-api-access-9vc9t\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458266 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-login\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458353 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458457 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458478 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458509 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458581 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458627 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-service-ca\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458629 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-audit-policies\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.458646 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.459409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-error\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.459463 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-router-certs\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.459919 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.460349 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-service-ca\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.461108 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.465470 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-error\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.465731 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-login\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.466154 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.466623 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-session\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.467946 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.468306 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.472618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-router-certs\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.472651 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23871358-eea1-4e1e-a5fc-c09251530e29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.474788 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.475088 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.479158 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vc9t\" (UniqueName: \"kubernetes.io/projected/23871358-eea1-4e1e-a5fc-c09251530e29-kube-api-access-9vc9t\") pod \"oauth-openshift-64fdd788dd-v58ff\" (UID: \"23871358-eea1-4e1e-a5fc-c09251530e29\") " pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.490510 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55aed1a-bd22-4591-9394-247b0dbca87d" path="/var/lib/kubelet/pods/c55aed1a-bd22-4591-9394-247b0dbca87d/volumes" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.534456 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.567824 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.580578 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.628971 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.667977 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.879511 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64fdd788dd-v58ff"] Dec 12 14:14:41 crc kubenswrapper[5113]: I1212 14:14:41.963968 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.067183 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.146721 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.318605 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.342797 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.342898 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.409285 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.427370 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.444269 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.486931 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.527311 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.684528 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.715286 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.838659 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" event={"ID":"23871358-eea1-4e1e-a5fc-c09251530e29","Type":"ContainerStarted","Data":"9388fa5a1bc51d9ff65acb3cf14b51106b4d8c83a2e6fb21e2f7bad01b9e38ba"} Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.838721 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" event={"ID":"23871358-eea1-4e1e-a5fc-c09251530e29","Type":"ContainerStarted","Data":"215924dd4cc1f1fb099c3464dd3ce76d12b7d38c8deb3bec16c3afd6678938f3"} Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.839357 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.847483 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.860243 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64fdd788dd-v58ff" podStartSLOduration=50.860222269 podStartE2EDuration="50.860222269s" podCreationTimestamp="2025-12-12 14:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:14:42.858470582 +0000 UTC m=+265.693720439" watchObservedRunningTime="2025-12-12 14:14:42.860222269 +0000 UTC m=+265.695472116" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.908008 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.917876 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:42 crc kubenswrapper[5113]: I1212 14:14:42.950988 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.152460 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.200803 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.225152 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.227204 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1" gracePeriod=5 Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.269845 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.275350 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.325276 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.341021 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.425493 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.516608 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.520095 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.578517 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.586971 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.604069 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.717492 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.908944 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.942230 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 14:14:43 crc kubenswrapper[5113]: I1212 14:14:43.984347 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.117294 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.305627 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.364636 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.446714 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.482266 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.798095 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.842724 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 14:14:44 crc 
kubenswrapper[5113]: I1212 14:14:44.963668 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 14:14:44 crc kubenswrapper[5113]: I1212 14:14:44.986388 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.211945 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.266807 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.367608 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.397564 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.434026 5113 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.608080 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.931109 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.967487 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 14:14:45 crc kubenswrapper[5113]: I1212 14:14:45.982814 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 14:14:46 crc kubenswrapper[5113]: I1212 14:14:46.224452 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 12 14:14:46 crc kubenswrapper[5113]: I1212 14:14:46.524339 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 12 14:14:46 crc kubenswrapper[5113]: I1212 14:14:46.775276 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 14:14:46 crc kubenswrapper[5113]: I1212 14:14:46.992694 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.880858 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.881241 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.883904 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.887054 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.887140 5113 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1" exitCode=137 Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.887243 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.887306 5113 scope.go:117] "RemoveContainer" containerID="e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.906223 5113 scope.go:117] "RemoveContainer" containerID="e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1" Dec 12 14:14:48 crc kubenswrapper[5113]: E1212 14:14:48.906711 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1\": container with ID starting with e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1 not found: ID does not exist" containerID="e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.906772 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1"} err="failed to get container status \"e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1\": rpc error: code = NotFound desc = could not find container \"e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1\": container with ID starting with e083dfd03c1dcce8c7b726552b005f3477f946677061476a5ae74f049bd534d1 not found: ID does not exist" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962590 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962664 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962714 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962761 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962799 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962824 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962814 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962868 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.962872 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.963255 5113 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.963294 5113 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.963314 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.963331 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:48 crc kubenswrapper[5113]: I1212 14:14:48.976815 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:14:49 crc kubenswrapper[5113]: I1212 14:14:49.064774 5113 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 14:14:49 crc kubenswrapper[5113]: I1212 14:14:49.210546 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 12 14:14:49 crc kubenswrapper[5113]: I1212 14:14:49.489826 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 12 14:14:50 crc kubenswrapper[5113]: I1212 14:14:50.901637 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:14:50 crc kubenswrapper[5113]: I1212 14:14:50.901933 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:14:54 crc kubenswrapper[5113]: I1212 14:14:54.740559 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.176476 5113 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b"] Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.177428 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.177443 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.177565 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.236762 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b"] Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.236926 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.238839 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.238940 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.331188 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5zz\" (UniqueName: \"kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.331295 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.331336 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.433058 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.433218 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume\") pod 
\"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.433286 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5zz\" (UniqueName: \"kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.434250 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.440027 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.451661 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5zz\" (UniqueName: \"kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz\") pod \"collect-profiles-29425815-lf52b\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.558355 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.751159 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b"] Dec 12 14:15:00 crc kubenswrapper[5113]: W1212 14:15:00.757850 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26652227_8cb3_4c83_8fea_9cb56cdc660f.slice/crio-561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a WatchSource:0}: Error finding container 561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a: Status 404 returned error can't find the container with id 561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.966786 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" event={"ID":"26652227-8cb3-4c83-8fea-9cb56cdc660f","Type":"ContainerStarted","Data":"0c3d0c383437933fbb69956176207f3b6643ece0d049bf71e2e6e27b0cd4f2bc"} Dec 12 14:15:00 crc kubenswrapper[5113]: I1212 14:15:00.966874 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" event={"ID":"26652227-8cb3-4c83-8fea-9cb56cdc660f","Type":"ContainerStarted","Data":"561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a"} Dec 12 14:15:01 crc kubenswrapper[5113]: I1212 14:15:01.972932 5113 generic.go:358] "Generic (PLEG): container finished" podID="26652227-8cb3-4c83-8fea-9cb56cdc660f" containerID="0c3d0c383437933fbb69956176207f3b6643ece0d049bf71e2e6e27b0cd4f2bc" exitCode=0 Dec 12 14:15:01 crc kubenswrapper[5113]: I1212 14:15:01.972992 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" event={"ID":"26652227-8cb3-4c83-8fea-9cb56cdc660f","Type":"ContainerDied","Data":"0c3d0c383437933fbb69956176207f3b6643ece0d049bf71e2e6e27b0cd4f2bc"} Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.254545 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.291380 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h5zz\" (UniqueName: \"kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz\") pod \"26652227-8cb3-4c83-8fea-9cb56cdc660f\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.291462 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume\") pod \"26652227-8cb3-4c83-8fea-9cb56cdc660f\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.291523 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume\") pod \"26652227-8cb3-4c83-8fea-9cb56cdc660f\" (UID: \"26652227-8cb3-4c83-8fea-9cb56cdc660f\") " Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.292956 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume" (OuterVolumeSpecName: "config-volume") pod "26652227-8cb3-4c83-8fea-9cb56cdc660f" (UID: "26652227-8cb3-4c83-8fea-9cb56cdc660f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.304911 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26652227-8cb3-4c83-8fea-9cb56cdc660f" (UID: "26652227-8cb3-4c83-8fea-9cb56cdc660f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.305694 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz" (OuterVolumeSpecName: "kube-api-access-9h5zz") pod "26652227-8cb3-4c83-8fea-9cb56cdc660f" (UID: "26652227-8cb3-4c83-8fea-9cb56cdc660f"). InnerVolumeSpecName "kube-api-access-9h5zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.321274 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.393018 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26652227-8cb3-4c83-8fea-9cb56cdc660f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.393070 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9h5zz\" (UniqueName: \"kubernetes.io/projected/26652227-8cb3-4c83-8fea-9cb56cdc660f-kube-api-access-9h5zz\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:03 crc kubenswrapper[5113]: I1212 14:15:03.393082 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26652227-8cb3-4c83-8fea-9cb56cdc660f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:04 crc kubenswrapper[5113]: I1212 14:15:04.016920 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" Dec 12 14:15:04 crc kubenswrapper[5113]: I1212 14:15:04.016958 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425815-lf52b" event={"ID":"26652227-8cb3-4c83-8fea-9cb56cdc660f","Type":"ContainerDied","Data":"561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a"} Dec 12 14:15:04 crc kubenswrapper[5113]: I1212 14:15:04.017006 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561f7fb3558778d85f423e1998aa02a16b90fdc5eea9d582cf444b3c26e2889a" Dec 12 14:15:04 crc kubenswrapper[5113]: I1212 14:15:04.166412 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 14:15:07 crc kubenswrapper[5113]: I1212 14:15:07.936175 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 14:15:08 crc kubenswrapper[5113]: I1212 14:15:08.645939 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"] Dec 12 14:15:08 crc kubenswrapper[5113]: I1212 14:15:08.646257 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" containerName="controller-manager" containerID="cri-o://fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9" gracePeriod=30 Dec 12 14:15:08 crc kubenswrapper[5113]: I1212 14:15:08.667958 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"] Dec 12 14:15:08 crc kubenswrapper[5113]: I1212 14:15:08.668246 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" containerName="route-controller-manager" containerID="cri-o://8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b" gracePeriod=30 Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.020925 5113 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.027596 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.050214 5113 generic.go:358] "Generic (PLEG): container finished" podID="f07154ae-c1f9-42af-9327-e211d7199c82" containerID="8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b" exitCode=0 Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.050253 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" event={"ID":"f07154ae-c1f9-42af-9327-e211d7199c82","Type":"ContainerDied","Data":"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b"} Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.050302 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" event={"ID":"f07154ae-c1f9-42af-9327-e211d7199c82","Type":"ContainerDied","Data":"b7e2b9f82bef06388beef1d226a2b4c6cb3aa9b23bebadc36b56e6f5c7311f2e"} Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.050325 5113 scope.go:117] "RemoveContainer" containerID="8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.050326 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.053476 5113 generic.go:358] "Generic (PLEG): container finished" podID="ec404d0b-005f-4f01-80db-d36605948e5c" containerID="fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9" exitCode=0 Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.053590 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" event={"ID":"ec404d0b-005f-4f01-80db-d36605948e5c","Type":"ContainerDied","Data":"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9"} Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.053637 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" event={"ID":"ec404d0b-005f-4f01-80db-d36605948e5c","Type":"ContainerDied","Data":"4e595f1dfbbffface8699c5529df1b36a5020e03a41978f352fbc40d14865898"} Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.053813 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-c6kfp" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.054828 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055721 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" containerName="route-controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055763 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" containerName="route-controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055779 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" containerName="controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055787 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" containerName="controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055848 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26652227-8cb3-4c83-8fea-9cb56cdc660f" containerName="collect-profiles" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.055858 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="26652227-8cb3-4c83-8fea-9cb56cdc660f" containerName="collect-profiles" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.056031 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" containerName="route-controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.056048 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" containerName="controller-manager" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.056058 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="26652227-8cb3-4c83-8fea-9cb56cdc660f" containerName="collect-profiles" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.065902 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.069191 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.078185 5113 scope.go:117] "RemoveContainer" containerID="8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b" Dec 12 14:15:09 crc kubenswrapper[5113]: E1212 14:15:09.083654 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b\": container with ID starting with 8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b not found: ID does not exist" containerID="8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.083721 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b"} err="failed to get container status \"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b\": rpc error: code = NotFound desc = could not find container \"8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b\": container with ID starting with 8f0433fc7837b97c7eb6ae75c689bcc6b094dedee723959463aec7c55bcff28b not found: ID does not exist" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.083777 5113 scope.go:117] "RemoveContainer" containerID="fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.090410 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094157 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp\") pod \"f07154ae-c1f9-42af-9327-e211d7199c82\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094302 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert\") pod \"f07154ae-c1f9-42af-9327-e211d7199c82\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094418 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca\") pod \"f07154ae-c1f9-42af-9327-e211d7199c82\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094538 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q78t6\" (UniqueName: \"kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6\") pod \"f07154ae-c1f9-42af-9327-e211d7199c82\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094670 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094795 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.094933 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.095032 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config\") pod \"f07154ae-c1f9-42af-9327-e211d7199c82\" (UID: \"f07154ae-c1f9-42af-9327-e211d7199c82\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.095535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.095693 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48786\" (UniqueName: \"kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.095813 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert\") pod \"ec404d0b-005f-4f01-80db-d36605948e5c\" (UID: \"ec404d0b-005f-4f01-80db-d36605948e5c\") " Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.096311 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp" (OuterVolumeSpecName: "tmp") pod "f07154ae-c1f9-42af-9327-e211d7199c82" (UID: "f07154ae-c1f9-42af-9327-e211d7199c82"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.096528 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp" (OuterVolumeSpecName: "tmp") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.096700 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca" (OuterVolumeSpecName: "client-ca") pod "f07154ae-c1f9-42af-9327-e211d7199c82" (UID: "f07154ae-c1f9-42af-9327-e211d7199c82"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.097045 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config" (OuterVolumeSpecName: "config") pod "f07154ae-c1f9-42af-9327-e211d7199c82" (UID: "f07154ae-c1f9-42af-9327-e211d7199c82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.097241 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.098158 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.098270 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config" (OuterVolumeSpecName: "config") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.104095 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.106846 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6" (OuterVolumeSpecName: "kube-api-access-q78t6") pod "f07154ae-c1f9-42af-9327-e211d7199c82" (UID: "f07154ae-c1f9-42af-9327-e211d7199c82"). InnerVolumeSpecName "kube-api-access-q78t6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.108480 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786" (OuterVolumeSpecName: "kube-api-access-48786") pod "ec404d0b-005f-4f01-80db-d36605948e5c" (UID: "ec404d0b-005f-4f01-80db-d36605948e5c"). InnerVolumeSpecName "kube-api-access-48786". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.108607 5113 scope.go:117] "RemoveContainer" containerID="fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9" Dec 12 14:15:09 crc kubenswrapper[5113]: E1212 14:15:09.109063 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9\": container with ID starting with fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9 not found: ID does not exist" containerID="fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.109095 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9"} err="failed to get container status \"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9\": rpc error: code = NotFound desc = could not find container \"fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9\": container with ID starting with fae526a711fa062b32fdd7f3fb87b51a568b046de252e0f347e69074cf5e32f9 not found: ID does not exist" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.116005 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.116553 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f07154ae-c1f9-42af-9327-e211d7199c82" (UID: "f07154ae-c1f9-42af-9327-e211d7199c82"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.121615 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.124592 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204437 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204694 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204725 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204808 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204846 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204868 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.204890 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205000 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhl22\" (UniqueName: 
\"kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205059 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205103 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwpt\" (UniqueName: \"kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205146 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205299 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec404d0b-005f-4f01-80db-d36605948e5c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205325 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f07154ae-c1f9-42af-9327-e211d7199c82-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205338 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07154ae-c1f9-42af-9327-e211d7199c82-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205350 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205362 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q78t6\" (UniqueName: \"kubernetes.io/projected/f07154ae-c1f9-42af-9327-e211d7199c82-kube-api-access-q78t6\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205374 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205386 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205397 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/ec404d0b-005f-4f01-80db-d36605948e5c-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205408 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07154ae-c1f9-42af-9327-e211d7199c82-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205420 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec404d0b-005f-4f01-80db-d36605948e5c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.205433 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-48786\" (UniqueName: \"kubernetes.io/projected/ec404d0b-005f-4f01-80db-d36605948e5c-kube-api-access-48786\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306473 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306517 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306560 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306581 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306601 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306652 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xhl22\" (UniqueName: \"kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306668 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306685 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwpt\" (UniqueName: \"kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306722 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.306740 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.307523 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.307764 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.307893 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.307916 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: 
\"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.308016 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.308420 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.309037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.311652 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.311652 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.324859 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwpt\" (UniqueName: \"kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt\") pod \"controller-manager-5b6495d9d9-s8sv9\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.325662 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhl22\" (UniqueName: \"kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22\") pod \"route-controller-manager-8cc84d76-zwhxc\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.385491 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.390303 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wh8pj"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.396193 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.398542 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.403496 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-c6kfp"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.439352 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.495291 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec404d0b-005f-4f01-80db-d36605948e5c" path="/var/lib/kubelet/pods/ec404d0b-005f-4f01-80db-d36605948e5c/volumes" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.496041 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07154ae-c1f9-42af-9327-e211d7199c82" path="/var/lib/kubelet/pods/f07154ae-c1f9-42af-9327-e211d7199c82/volumes" Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.611819 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:09 crc kubenswrapper[5113]: I1212 14:15:09.652074 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.069017 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" event={"ID":"652e830c-195f-43a7-9a39-9795c10831e0","Type":"ContainerStarted","Data":"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376"} Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.069059 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" event={"ID":"652e830c-195f-43a7-9a39-9795c10831e0","Type":"ContainerStarted","Data":"5998bb9df8db82ec4f48d855ca5c1c39b557ecc2ec6b1a6e2760ff8af38cd6fa"} Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.069311 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.072013 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" event={"ID":"1828646e-b717-40d5-af1e-073589f081ce","Type":"ContainerStarted","Data":"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432"} Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.072051 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" event={"ID":"1828646e-b717-40d5-af1e-073589f081ce","Type":"ContainerStarted","Data":"1e7ac2fd3f3f1d93c91bc7348ffc93c26c13102386aa8244c3c383853a55dba7"} Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.072241 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.103992 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" podStartSLOduration=2.103979065 podStartE2EDuration="2.103979065s" podCreationTimestamp="2025-12-12 14:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:10.100474853 +0000 UTC m=+292.935724690" watchObservedRunningTime="2025-12-12 14:15:10.103979065 +0000 UTC m=+292.939228892" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.104440 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" podStartSLOduration=2.104433419 podStartE2EDuration="2.104433419s" podCreationTimestamp="2025-12-12 14:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:10.085474224 +0000 UTC m=+292.920724051" watchObservedRunningTime="2025-12-12 14:15:10.104433419 +0000 UTC m=+292.939683246" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.358659 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.698430 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.849248 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:10 crc kubenswrapper[5113]: I1212 14:15:10.867593 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:11 crc kubenswrapper[5113]: I1212 14:15:11.069323 5113 patch_prober.go:28] interesting pod/route-controller-manager-8cc84d76-zwhxc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 14:15:11 crc kubenswrapper[5113]: I1212 14:15:11.069418 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" podUID="652e830c-195f-43a7-9a39-9795c10831e0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 14:15:11 crc kubenswrapper[5113]: I1212 14:15:11.274823 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.083159 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" podUID="1828646e-b717-40d5-af1e-073589f081ce" containerName="controller-manager" containerID="cri-o://ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432" gracePeriod=30 Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.083229 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" 
podUID="652e830c-195f-43a7-9a39-9795c10831e0" containerName="route-controller-manager" containerID="cri-o://fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376" gracePeriod=30 Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.475558 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.481649 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.567933 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568094 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568163 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568212 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwpt\" (UniqueName: \"kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568295 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568319 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config\") pod \"1828646e-b717-40d5-af1e-073589f081ce\" (UID: \"1828646e-b717-40d5-af1e-073589f081ce\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568629 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp" (OuterVolumeSpecName: "tmp") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.568908 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.569173 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config" (OuterVolumeSpecName: "config") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.569435 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.575499 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.577676 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt" (OuterVolumeSpecName: "kube-api-access-jdwpt") pod "1828646e-b717-40d5-af1e-073589f081ce" (UID: "1828646e-b717-40d5-af1e-073589f081ce"). InnerVolumeSpecName "kube-api-access-jdwpt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.580566 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581350 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="652e830c-195f-43a7-9a39-9795c10831e0" containerName="route-controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581374 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="652e830c-195f-43a7-9a39-9795c10831e0" containerName="route-controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581427 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1828646e-b717-40d5-af1e-073589f081ce" containerName="controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581435 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1828646e-b717-40d5-af1e-073589f081ce" containerName="controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581545 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="652e830c-195f-43a7-9a39-9795c10831e0" containerName="route-controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.581571 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="1828646e-b717-40d5-af1e-073589f081ce" containerName="controller-manager" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.589471 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.607403 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.612465 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.616349 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.636828 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.669414 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhl22\" (UniqueName: \"kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22\") pod \"652e830c-195f-43a7-9a39-9795c10831e0\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.669478 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp\") pod \"652e830c-195f-43a7-9a39-9795c10831e0\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.669803 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp" (OuterVolumeSpecName: "tmp") pod "652e830c-195f-43a7-9a39-9795c10831e0" (UID: "652e830c-195f-43a7-9a39-9795c10831e0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.669989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config\") pod \"652e830c-195f-43a7-9a39-9795c10831e0\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.670532 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config" (OuterVolumeSpecName: "config") pod "652e830c-195f-43a7-9a39-9795c10831e0" (UID: "652e830c-195f-43a7-9a39-9795c10831e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.670619 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca\") pod \"652e830c-195f-43a7-9a39-9795c10831e0\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.671101 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "652e830c-195f-43a7-9a39-9795c10831e0" (UID: "652e830c-195f-43a7-9a39-9795c10831e0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.671157 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert\") pod \"652e830c-195f-43a7-9a39-9795c10831e0\" (UID: \"652e830c-195f-43a7-9a39-9795c10831e0\") " Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672590 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672615 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jdwpt\" (UniqueName: \"kubernetes.io/projected/1828646e-b717-40d5-af1e-073589f081ce-kube-api-access-jdwpt\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672629 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/652e830c-195f-43a7-9a39-9795c10831e0-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672638 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1828646e-b717-40d5-af1e-073589f081ce-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672647 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672655 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1828646e-b717-40d5-af1e-073589f081ce-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672664 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672673 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/652e830c-195f-43a7-9a39-9795c10831e0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.672681 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1828646e-b717-40d5-af1e-073589f081ce-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.673062 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22" (OuterVolumeSpecName: "kube-api-access-xhl22") pod "652e830c-195f-43a7-9a39-9795c10831e0" (UID: "652e830c-195f-43a7-9a39-9795c10831e0"). InnerVolumeSpecName "kube-api-access-xhl22". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.673242 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "652e830c-195f-43a7-9a39-9795c10831e0" (UID: "652e830c-195f-43a7-9a39-9795c10831e0"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773686 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscsl\" (UniqueName: \"kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773754 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773834 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773891 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773931 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.773970 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774099 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774166 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") 
" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774229 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j7sz\" (UniqueName: \"kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774271 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774318 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774467 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xhl22\" (UniqueName: \"kubernetes.io/projected/652e830c-195f-43a7-9a39-9795c10831e0-kube-api-access-xhl22\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.774500 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652e830c-195f-43a7-9a39-9795c10831e0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875451 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nscsl\" (UniqueName: \"kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875589 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: 
\"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875644 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875679 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875722 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875756 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9j7sz\" (UniqueName: \"kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875787 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.875809 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.876383 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.876916 5113 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.877048 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.877333 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.877637 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.877665 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.878066 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.881756 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.881769 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.894876 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscsl\" (UniqueName: \"kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl\") pod \"route-controller-manager-68bf8c9dc6-szmn7\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " 
pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.897096 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j7sz\" (UniqueName: \"kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz\") pod \"controller-manager-d74786f5f-6nfz4\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.909799 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:12 crc kubenswrapper[5113]: I1212 14:15:12.937138 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.090840 5113 generic.go:358] "Generic (PLEG): container finished" podID="652e830c-195f-43a7-9a39-9795c10831e0" containerID="fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376" exitCode=0 Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.090942 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.090946 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" event={"ID":"652e830c-195f-43a7-9a39-9795c10831e0","Type":"ContainerDied","Data":"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376"} Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.090988 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc" event={"ID":"652e830c-195f-43a7-9a39-9795c10831e0","Type":"ContainerDied","Data":"5998bb9df8db82ec4f48d855ca5c1c39b557ecc2ec6b1a6e2760ff8af38cd6fa"} Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.091007 5113 scope.go:117] "RemoveContainer" containerID="fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.100724 5113 generic.go:358] "Generic (PLEG): container finished" podID="1828646e-b717-40d5-af1e-073589f081ce" containerID="ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432" exitCode=0 Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.100828 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" event={"ID":"1828646e-b717-40d5-af1e-073589f081ce","Type":"ContainerDied","Data":"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432"} Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.100866 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" event={"ID":"1828646e-b717-40d5-af1e-073589f081ce","Type":"ContainerDied","Data":"1e7ac2fd3f3f1d93c91bc7348ffc93c26c13102386aa8244c3c383853a55dba7"} Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.100947 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.115034 5113 scope.go:117] "RemoveContainer" containerID="fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376" Dec 12 14:15:13 crc kubenswrapper[5113]: E1212 14:15:13.115504 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376\": container with ID starting with fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376 not found: ID does not exist" containerID="fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.115590 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376"} err="failed to get container status \"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376\": rpc error: code = NotFound desc = could not find container \"fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376\": container with ID starting with fad6893fd8f988a7d8443eb0c77be1951aa259110846f06ce8f5d128871ae376 not found: ID does not exist" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.115628 5113 scope.go:117] "RemoveContainer" containerID="ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.130349 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.140257 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8cc84d76-zwhxc"] Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.145677 5113 scope.go:117] "RemoveContainer" containerID="ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.147876 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:13 crc kubenswrapper[5113]: E1212 14:15:13.149253 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432\": container with ID starting with ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432 not found: ID does not exist" containerID="ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.149551 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432"} err="failed to get container status \"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432\": rpc error: code = NotFound desc = could not find container \"ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432\": container with ID starting with ae38572d0c98f4bd5b67cbee0a0bbf55b0e992fc1e38039d8eee905cac172432 not found: ID does not exist" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.159686 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b6495d9d9-s8sv9"] Dec 12 14:15:13 crc 
kubenswrapper[5113]: I1212 14:15:13.195606 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.245703 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.514819 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1828646e-b717-40d5-af1e-073589f081ce" path="/var/lib/kubelet/pods/1828646e-b717-40d5-af1e-073589f081ce/volumes" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.515980 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652e830c-195f-43a7-9a39-9795c10831e0" path="/var/lib/kubelet/pods/652e830c-195f-43a7-9a39-9795c10831e0/volumes" Dec 12 14:15:13 crc kubenswrapper[5113]: I1212 14:15:13.595349 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.110945 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" event={"ID":"0863d626-d258-493e-81fa-7da6dadfa40e","Type":"ContainerStarted","Data":"ea0a59221452e97b0adf3b7a50c25b59032d2dafc49beb916d6545fd9984eeda"} Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.111260 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" event={"ID":"0863d626-d258-493e-81fa-7da6dadfa40e","Type":"ContainerStarted","Data":"b9774c5b92ba33dc3cb4690b369a38881981423305d8b5e645bde189e1dfda80"} Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.112949 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.114895 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" event={"ID":"b1b3f46c-2593-47aa-90af-8d3603657a53","Type":"ContainerStarted","Data":"c91ae85c1ab59a9937f99b359b04bc6c1d8da5f84f2e4e31e9cc85d5131a439c"} Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.114936 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" event={"ID":"b1b3f46c-2593-47aa-90af-8d3603657a53","Type":"ContainerStarted","Data":"882379ec696ac325ac622ca8c8320b166593fca05e286cfe5b6e806b8b6258a8"} Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.115485 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.118858 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.120221 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.133843 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" podStartSLOduration=3.13382914 podStartE2EDuration="3.13382914s" podCreationTimestamp="2025-12-12 14:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:14.130800123 +0000 UTC m=+296.966049980" watchObservedRunningTime="2025-12-12 14:15:14.13382914 +0000 UTC m=+296.969078967" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.159597 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" podStartSLOduration=4.159578192 podStartE2EDuration="4.159578192s" podCreationTimestamp="2025-12-12 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:14.159473648 +0000 UTC m=+296.994723485" watchObservedRunningTime="2025-12-12 14:15:14.159578192 +0000 UTC m=+296.994828019" Dec 12 14:15:14 crc kubenswrapper[5113]: I1212 14:15:14.592839 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:15:15 crc kubenswrapper[5113]: I1212 14:15:15.110260 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 14:15:15 crc kubenswrapper[5113]: I1212 14:15:15.626267 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 12 14:15:17 crc kubenswrapper[5113]: I1212 14:15:17.748408 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54268: no serving certificate available for the kubelet" Dec 12 14:15:18 crc kubenswrapper[5113]: I1212 14:15:18.165481 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:15:18 crc kubenswrapper[5113]: I1212 14:15:18.166066 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:15:20 crc kubenswrapper[5113]: I1212 14:15:20.902366 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:15:20 crc kubenswrapper[5113]: I1212 14:15:20.903015 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:15:20 crc kubenswrapper[5113]: I1212 14:15:20.903095 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:15:20 crc kubenswrapper[5113]: I1212 14:15:20.903605 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:15:20 crc kubenswrapper[5113]: I1212 14:15:20.903665 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748" gracePeriod=600 Dec 12 14:15:21 crc kubenswrapper[5113]: I1212 14:15:21.028628 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:15:21 crc kubenswrapper[5113]: I1212 14:15:21.152404 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748" exitCode=0 Dec 12 14:15:21 crc kubenswrapper[5113]: I1212 14:15:21.152646 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748"} Dec 12 14:15:22 crc kubenswrapper[5113]: I1212 14:15:22.159857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0"} Dec 12 14:15:23 crc kubenswrapper[5113]: I1212 14:15:23.113244 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 14:15:28 crc kubenswrapper[5113]: I1212 14:15:28.658385 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:28 crc kubenswrapper[5113]: I1212 14:15:28.659876 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" podUID="b1b3f46c-2593-47aa-90af-8d3603657a53" containerName="controller-manager" containerID="cri-o://c91ae85c1ab59a9937f99b359b04bc6c1d8da5f84f2e4e31e9cc85d5131a439c" gracePeriod=30 Dec 12 14:15:28 crc kubenswrapper[5113]: I1212 14:15:28.686310 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:28 crc kubenswrapper[5113]: I1212 14:15:28.686658 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" podUID="0863d626-d258-493e-81fa-7da6dadfa40e" containerName="route-controller-manager" containerID="cri-o://ea0a59221452e97b0adf3b7a50c25b59032d2dafc49beb916d6545fd9984eeda" gracePeriod=30 Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.199938 5113 generic.go:358] "Generic (PLEG): container finished" podID="b1b3f46c-2593-47aa-90af-8d3603657a53" containerID="c91ae85c1ab59a9937f99b359b04bc6c1d8da5f84f2e4e31e9cc85d5131a439c" exitCode=0 Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.200064 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" event={"ID":"b1b3f46c-2593-47aa-90af-8d3603657a53","Type":"ContainerDied","Data":"c91ae85c1ab59a9937f99b359b04bc6c1d8da5f84f2e4e31e9cc85d5131a439c"} Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.202172 5113 generic.go:358] "Generic (PLEG): container finished" podID="0863d626-d258-493e-81fa-7da6dadfa40e" containerID="ea0a59221452e97b0adf3b7a50c25b59032d2dafc49beb916d6545fd9984eeda" exitCode=0 Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.202239 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" event={"ID":"0863d626-d258-493e-81fa-7da6dadfa40e","Type":"ContainerDied","Data":"ea0a59221452e97b0adf3b7a50c25b59032d2dafc49beb916d6545fd9984eeda"} Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.202278 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" event={"ID":"0863d626-d258-493e-81fa-7da6dadfa40e","Type":"ContainerDied","Data":"b9774c5b92ba33dc3cb4690b369a38881981423305d8b5e645bde189e1dfda80"} Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.202316 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9774c5b92ba33dc3cb4690b369a38881981423305d8b5e645bde189e1dfda80" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.213671 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.242293 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4"] Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.243021 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0863d626-d258-493e-81fa-7da6dadfa40e" containerName="route-controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.243045 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0863d626-d258-493e-81fa-7da6dadfa40e" containerName="route-controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.243185 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="0863d626-d258-493e-81fa-7da6dadfa40e" containerName="route-controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.246719 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.251444 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4"] Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.359613 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert\") pod \"0863d626-d258-493e-81fa-7da6dadfa40e\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.359696 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp\") pod \"0863d626-d258-493e-81fa-7da6dadfa40e\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.359726 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nscsl\" (UniqueName: \"kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl\") pod \"0863d626-d258-493e-81fa-7da6dadfa40e\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.359814 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config\") pod \"0863d626-d258-493e-81fa-7da6dadfa40e\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360236 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp" (OuterVolumeSpecName: "tmp") pod "0863d626-d258-493e-81fa-7da6dadfa40e" (UID: "0863d626-d258-493e-81fa-7da6dadfa40e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360295 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca\") pod \"0863d626-d258-493e-81fa-7da6dadfa40e\" (UID: \"0863d626-d258-493e-81fa-7da6dadfa40e\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360530 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be0fa70f-53bd-429b-a84c-5ca862cfaa57-tmp\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360563 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qh97\" (UniqueName: \"kubernetes.io/projected/be0fa70f-53bd-429b-a84c-5ca862cfaa57-kube-api-access-9qh97\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360600 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fa70f-53bd-429b-a84c-5ca862cfaa57-serving-cert\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360666 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-client-ca\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360649 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config" (OuterVolumeSpecName: "config") pod "0863d626-d258-493e-81fa-7da6dadfa40e" (UID: "0863d626-d258-493e-81fa-7da6dadfa40e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360726 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-config\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360800 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0863d626-d258-493e-81fa-7da6dadfa40e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.360813 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.361061 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca" (OuterVolumeSpecName: "client-ca") pod "0863d626-d258-493e-81fa-7da6dadfa40e" (UID: "0863d626-d258-493e-81fa-7da6dadfa40e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.366256 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl" (OuterVolumeSpecName: "kube-api-access-nscsl") pod "0863d626-d258-493e-81fa-7da6dadfa40e" (UID: "0863d626-d258-493e-81fa-7da6dadfa40e"). InnerVolumeSpecName "kube-api-access-nscsl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.368517 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0863d626-d258-493e-81fa-7da6dadfa40e" (UID: "0863d626-d258-493e-81fa-7da6dadfa40e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.395947 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.423047 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5747bf667d-jvkww"] Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.424157 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1b3f46c-2593-47aa-90af-8d3603657a53" containerName="controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.424179 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b3f46c-2593-47aa-90af-8d3603657a53" containerName="controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.424313 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1b3f46c-2593-47aa-90af-8d3603657a53" containerName="controller-manager" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.442140 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.444886 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747bf667d-jvkww"] Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461734 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be0fa70f-53bd-429b-a84c-5ca862cfaa57-tmp\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461787 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qh97\" (UniqueName: \"kubernetes.io/projected/be0fa70f-53bd-429b-a84c-5ca862cfaa57-kube-api-access-9qh97\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461828 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fa70f-53bd-429b-a84c-5ca862cfaa57-serving-cert\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461866 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-client-ca\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461900 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-config\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461973 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0863d626-d258-493e-81fa-7da6dadfa40e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.461987 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nscsl\" (UniqueName: \"kubernetes.io/projected/0863d626-d258-493e-81fa-7da6dadfa40e-kube-api-access-nscsl\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.462002 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0863d626-d258-493e-81fa-7da6dadfa40e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.463517 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-config\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: 
\"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.463898 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be0fa70f-53bd-429b-a84c-5ca862cfaa57-tmp\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.466789 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be0fa70f-53bd-429b-a84c-5ca862cfaa57-client-ca\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.470394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fa70f-53bd-429b-a84c-5ca862cfaa57-serving-cert\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.488964 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qh97\" (UniqueName: \"kubernetes.io/projected/be0fa70f-53bd-429b-a84c-5ca862cfaa57-kube-api-access-9qh97\") pod \"route-controller-manager-fb65ff8f5-bsgf4\" (UID: \"be0fa70f-53bd-429b-a84c-5ca862cfaa57\") " pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563164 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563207 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j7sz\" (UniqueName: \"kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563262 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563304 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: 
\"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563362 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp\") pod \"b1b3f46c-2593-47aa-90af-8d3603657a53\" (UID: \"b1b3f46c-2593-47aa-90af-8d3603657a53\") " Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563474 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-config\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-proxy-ca-bundles\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563553 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-client-ca\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563574 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4gl\" (UniqueName: \"kubernetes.io/projected/f4df00eb-32a9-4b70-8bcb-02e965cac36d-kube-api-access-gl4gl\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563644 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4df00eb-32a9-4b70-8bcb-02e965cac36d-serving-cert\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.563671 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4df00eb-32a9-4b70-8bcb-02e965cac36d-tmp\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.564149 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca" (OuterVolumeSpecName: "client-ca") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.564234 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config" (OuterVolumeSpecName: "config") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.564372 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.564438 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp" (OuterVolumeSpecName: "tmp") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.566861 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz" (OuterVolumeSpecName: "kube-api-access-9j7sz") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "kube-api-access-9j7sz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.567278 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b1b3f46c-2593-47aa-90af-8d3603657a53" (UID: "b1b3f46c-2593-47aa-90af-8d3603657a53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.574450 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665030 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-client-ca\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665111 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4gl\" (UniqueName: \"kubernetes.io/projected/f4df00eb-32a9-4b70-8bcb-02e965cac36d-kube-api-access-gl4gl\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665210 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4df00eb-32a9-4b70-8bcb-02e965cac36d-serving-cert\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665242 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4df00eb-32a9-4b70-8bcb-02e965cac36d-tmp\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665293 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-config\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665320 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-proxy-ca-bundles\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665438 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1b3f46c-2593-47aa-90af-8d3603657a53-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665453 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b3f46c-2593-47aa-90af-8d3603657a53-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665465 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665476 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9j7sz\" (UniqueName: 
\"kubernetes.io/projected/b1b3f46c-2593-47aa-90af-8d3603657a53-kube-api-access-9j7sz\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665489 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.665499 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b3f46c-2593-47aa-90af-8d3603657a53-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.666255 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-client-ca\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.666363 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4df00eb-32a9-4b70-8bcb-02e965cac36d-tmp\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.667986 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-proxy-ca-bundles\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.668044 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4df00eb-32a9-4b70-8bcb-02e965cac36d-config\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.669453 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4df00eb-32a9-4b70-8bcb-02e965cac36d-serving-cert\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.683707 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4gl\" (UniqueName: \"kubernetes.io/projected/f4df00eb-32a9-4b70-8bcb-02e965cac36d-kube-api-access-gl4gl\") pod \"controller-manager-5747bf667d-jvkww\" (UID: \"f4df00eb-32a9-4b70-8bcb-02e965cac36d\") " pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.761822 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:29 crc kubenswrapper[5113]: I1212 14:15:29.876138 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4"] Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.210436 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" event={"ID":"be0fa70f-53bd-429b-a84c-5ca862cfaa57","Type":"ContainerStarted","Data":"08a9831d73416e560316efc60400cbbd4c7aa5fb7d55fca0d38e23f36c1f9438"} Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.210875 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" event={"ID":"be0fa70f-53bd-429b-a84c-5ca862cfaa57","Type":"ContainerStarted","Data":"da2c6f8265119dd729c04421d480e21ffc353490e2e4b77c8d27feb67d6bc536"} Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.210990 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.212589 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" event={"ID":"b1b3f46c-2593-47aa-90af-8d3603657a53","Type":"ContainerDied","Data":"882379ec696ac325ac622ca8c8320b166593fca05e286cfe5b6e806b8b6258a8"} Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.212625 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d74786f5f-6nfz4" Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.212682 5113 scope.go:117] "RemoveContainer" containerID="c91ae85c1ab59a9937f99b359b04bc6c1d8da5f84f2e4e31e9cc85d5131a439c" Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.212876 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7" Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.216789 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747bf667d-jvkww"] Dec 12 14:15:30 crc kubenswrapper[5113]: W1212 14:15:30.273823 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4df00eb_32a9_4b70_8bcb_02e965cac36d.slice/crio-d5659134721b87c0e7a648e0c607dcb9ad2e369c7aab471a6ca3a2bc9bed2177 WatchSource:0}: Error finding container d5659134721b87c0e7a648e0c607dcb9ad2e369c7aab471a6ca3a2bc9bed2177: Status 404 returned error can't find the container with id d5659134721b87c0e7a648e0c607dcb9ad2e369c7aab471a6ca3a2bc9bed2177 Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.274295 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" podStartSLOduration=2.274276449 podStartE2EDuration="2.274276449s" podCreationTimestamp="2025-12-12 14:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:30.273592178 +0000 UTC m=+313.108842015" watchObservedRunningTime="2025-12-12 14:15:30.274276449 +0000 UTC m=+313.109526296" Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.289435 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.297410 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-szmn7"] Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.344849 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:30 crc kubenswrapper[5113]: I1212 14:15:30.345155 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d74786f5f-6nfz4"] Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.099946 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fb65ff8f5-bsgf4" Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.220589 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" event={"ID":"f4df00eb-32a9-4b70-8bcb-02e965cac36d","Type":"ContainerStarted","Data":"c478cf44c643f4299e17452024916e2f0d8bc33c98958dcea96ceeb4b604a51c"} Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.220634 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" event={"ID":"f4df00eb-32a9-4b70-8bcb-02e965cac36d","Type":"ContainerStarted","Data":"d5659134721b87c0e7a648e0c607dcb9ad2e369c7aab471a6ca3a2bc9bed2177"} Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.220994 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.229179 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" Dec 12 
14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.240109 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5747bf667d-jvkww" podStartSLOduration=3.240091521 podStartE2EDuration="3.240091521s" podCreationTimestamp="2025-12-12 14:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:31.238857812 +0000 UTC m=+314.074107639" watchObservedRunningTime="2025-12-12 14:15:31.240091521 +0000 UTC m=+314.075341348" Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.489355 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0863d626-d258-493e-81fa-7da6dadfa40e" path="/var/lib/kubelet/pods/0863d626-d258-493e-81fa-7da6dadfa40e/volumes" Dec 12 14:15:31 crc kubenswrapper[5113]: I1212 14:15:31.490169 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b3f46c-2593-47aa-90af-8d3603657a53" path="/var/lib/kubelet/pods/b1b3f46c-2593-47aa-90af-8d3603657a53/volumes" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.201186 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.526306 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dlxjn"] Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.589034 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dlxjn"] Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.589245 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.599966 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-registry-certificates\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600328 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cb39405-f605-48cf-af86-74939d3de762-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600381 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-registry-tls\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600411 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsc8r\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-kube-api-access-gsc8r\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600494 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600819 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-trusted-ca\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.600909 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cb39405-f605-48cf-af86-74939d3de762-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.601027 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.632096 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702110 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702206 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-registry-certificates\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702244 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cb39405-f605-48cf-af86-74939d3de762-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: 
I1212 14:15:54.702294 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-registry-tls\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702323 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsc8r\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-kube-api-access-gsc8r\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702369 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-trusted-ca\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.702412 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cb39405-f605-48cf-af86-74939d3de762-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.704519 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4cb39405-f605-48cf-af86-74939d3de762-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.704990 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-registry-certificates\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.706022 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb39405-f605-48cf-af86-74939d3de762-trusted-ca\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.722909 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-registry-tls\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.722989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4cb39405-f605-48cf-af86-74939d3de762-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: 
\"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.782393 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsc8r\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-kube-api-access-gsc8r\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.789010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb39405-f605-48cf-af86-74939d3de762-bound-sa-token\") pod \"image-registry-5d9d95bf5b-dlxjn\" (UID: \"4cb39405-f605-48cf-af86-74939d3de762\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:54 crc kubenswrapper[5113]: I1212 14:15:54.911398 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:55 crc kubenswrapper[5113]: I1212 14:15:55.360622 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-dlxjn"] Dec 12 14:15:55 crc kubenswrapper[5113]: I1212 14:15:55.506997 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" event={"ID":"4cb39405-f605-48cf-af86-74939d3de762","Type":"ContainerStarted","Data":"65973d6083444333b90ba46bba6f4e43a05022f2e46c95b5c6603414bd7ad55a"} Dec 12 14:15:56 crc kubenswrapper[5113]: I1212 14:15:56.516110 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" event={"ID":"4cb39405-f605-48cf-af86-74939d3de762","Type":"ContainerStarted","Data":"7862121c333f28a1d5a228ec00fb3acd6788f17dbcc1d7a2da6833cf74db3858"} Dec 12 14:15:56 crc kubenswrapper[5113]: I1212 14:15:56.516679 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:15:56 crc kubenswrapper[5113]: I1212 14:15:56.541734 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" podStartSLOduration=2.5417082300000002 podStartE2EDuration="2.54170823s" podCreationTimestamp="2025-12-12 14:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:15:56.53830233 +0000 UTC m=+339.373552187" watchObservedRunningTime="2025-12-12 14:15:56.54170823 +0000 UTC m=+339.376958057" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.105332 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.106371 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xk8lq" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="registry-server" containerID="cri-o://f56fb335b7c839cfa4be9a9d5149e3549d706c4dfd372dd8780bd94d26dd6c7a" gracePeriod=30 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.128550 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.128867 5113 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtkxn" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="registry-server" containerID="cri-o://ea446cf8240020f64b053a7f202913cc32090ba323f88bb450cd10268fd6390f" gracePeriod=30 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.136329 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.137222 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" containerID="cri-o://8c24388c5d6967b229870fcddace94d2e822b8ed420985fe6700b7a4273ef32a" gracePeriod=30 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.145483 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.145835 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ppmfs" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="registry-server" containerID="cri-o://685dc44faab54c18fa0de2fc7a49b9453cbef7bed5e68a0546135d7457c24b29" gracePeriod=30 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.159598 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hkb9j"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.252223 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.252492 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.252916 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rzcm5" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="registry-server" containerID="cri-o://4d4078a33f54ea983811fb11ea9a645d1675d690c475410b84b353c0431b9d0a" gracePeriod=30 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.254411 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hkb9j"] Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.356738 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0508dd25-2dda-413d-9978-744802ef4487-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.356817 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0508dd25-2dda-413d-9978-744802ef4487-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.356866 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthzc\" (UniqueName: \"kubernetes.io/projected/0508dd25-2dda-413d-9978-744802ef4487-kube-api-access-wthzc\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.356893 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0508dd25-2dda-413d-9978-744802ef4487-tmp\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.458585 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0508dd25-2dda-413d-9978-744802ef4487-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.458643 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0508dd25-2dda-413d-9978-744802ef4487-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.458674 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wthzc\" (UniqueName: 
\"kubernetes.io/projected/0508dd25-2dda-413d-9978-744802ef4487-kube-api-access-wthzc\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.458695 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0508dd25-2dda-413d-9978-744802ef4487-tmp\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.459198 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0508dd25-2dda-413d-9978-744802ef4487-tmp\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.460433 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0508dd25-2dda-413d-9978-744802ef4487-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.467854 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0508dd25-2dda-413d-9978-744802ef4487-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.479400 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthzc\" (UniqueName: \"kubernetes.io/projected/0508dd25-2dda-413d-9978-744802ef4487-kube-api-access-wthzc\") pod \"marketplace-operator-547dbd544d-hkb9j\" (UID: \"0508dd25-2dda-413d-9978-744802ef4487\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.537184 5113 generic.go:358] "Generic (PLEG): container finished" podID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerID="4d4078a33f54ea983811fb11ea9a645d1675d690c475410b84b353c0431b9d0a" exitCode=0 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.537321 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerDied","Data":"4d4078a33f54ea983811fb11ea9a645d1675d690c475410b84b353c0431b9d0a"} Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.538779 5113 generic.go:358] "Generic (PLEG): container finished" podID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerID="8c24388c5d6967b229870fcddace94d2e822b8ed420985fe6700b7a4273ef32a" exitCode=0 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.538865 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" event={"ID":"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7","Type":"ContainerDied","Data":"8c24388c5d6967b229870fcddace94d2e822b8ed420985fe6700b7a4273ef32a"} Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.540636 5113 
generic.go:358] "Generic (PLEG): container finished" podID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerID="ea446cf8240020f64b053a7f202913cc32090ba323f88bb450cd10268fd6390f" exitCode=0 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.540695 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerDied","Data":"ea446cf8240020f64b053a7f202913cc32090ba323f88bb450cd10268fd6390f"} Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.542606 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aefb209-096f-4d97-bbde-df22378e9c13" containerID="f56fb335b7c839cfa4be9a9d5149e3549d706c4dfd372dd8780bd94d26dd6c7a" exitCode=0 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.542670 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerDied","Data":"f56fb335b7c839cfa4be9a9d5149e3549d706c4dfd372dd8780bd94d26dd6c7a"} Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.544873 5113 generic.go:358] "Generic (PLEG): container finished" podID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerID="685dc44faab54c18fa0de2fc7a49b9453cbef7bed5e68a0546135d7457c24b29" exitCode=0 Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.544903 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerDied","Data":"685dc44faab54c18fa0de2fc7a49b9453cbef7bed5e68a0546135d7457c24b29"} Dec 12 14:15:58 crc kubenswrapper[5113]: I1212 14:15:58.601346 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.012750 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hkb9j"] Dec 12 14:15:59 crc kubenswrapper[5113]: W1212 14:15:59.015997 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0508dd25_2dda_413d_9978_744802ef4487.slice/crio-c5d52455504d1f154fb7f689f8c86f95fc880bda2c4f26e643714a1b395f97fe WatchSource:0}: Error finding container c5d52455504d1f154fb7f689f8c86f95fc880bda2c4f26e643714a1b395f97fe: Status 404 returned error can't find the container with id c5d52455504d1f154fb7f689f8c86f95fc880bda2c4f26e643714a1b395f97fe Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.552446 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" event={"ID":"0508dd25-2dda-413d-9978-744802ef4487","Type":"ContainerStarted","Data":"c5d52455504d1f154fb7f689f8c86f95fc880bda2c4f26e643714a1b395f97fe"} Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.799920 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.881139 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities\") pod \"eab91f0e-3c39-4096-8d91-329f7e9812e8\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.881229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhm67\" (UniqueName: \"kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67\") pod \"eab91f0e-3c39-4096-8d91-329f7e9812e8\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.881288 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content\") pod \"eab91f0e-3c39-4096-8d91-329f7e9812e8\" (UID: \"eab91f0e-3c39-4096-8d91-329f7e9812e8\") " Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.882836 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities" (OuterVolumeSpecName: "utilities") pod "eab91f0e-3c39-4096-8d91-329f7e9812e8" (UID: "eab91f0e-3c39-4096-8d91-329f7e9812e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.888382 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67" (OuterVolumeSpecName: "kube-api-access-zhm67") pod "eab91f0e-3c39-4096-8d91-329f7e9812e8" (UID: "eab91f0e-3c39-4096-8d91-329f7e9812e8"). InnerVolumeSpecName "kube-api-access-zhm67". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.945290 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.982985 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:15:59 crc kubenswrapper[5113]: I1212 14:15:59.983023 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhm67\" (UniqueName: \"kubernetes.io/projected/eab91f0e-3c39-4096-8d91-329f7e9812e8-kube-api-access-zhm67\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.008280 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eab91f0e-3c39-4096-8d91-329f7e9812e8" (UID: "eab91f0e-3c39-4096-8d91-329f7e9812e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.076612 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.084218 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content\") pod \"90beb70e-da16-489d-b0c8-3ced9d98deea\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.084381 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities\") pod \"90beb70e-da16-489d-b0c8-3ced9d98deea\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.084441 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgjd\" (UniqueName: \"kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd\") pod \"90beb70e-da16-489d-b0c8-3ced9d98deea\" (UID: \"90beb70e-da16-489d-b0c8-3ced9d98deea\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.084718 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eab91f0e-3c39-4096-8d91-329f7e9812e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.085518 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities" (OuterVolumeSpecName: "utilities") pod "90beb70e-da16-489d-b0c8-3ced9d98deea" (UID: "90beb70e-da16-489d-b0c8-3ced9d98deea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.089503 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd" (OuterVolumeSpecName: "kube-api-access-jhgjd") pod "90beb70e-da16-489d-b0c8-3ced9d98deea" (UID: "90beb70e-da16-489d-b0c8-3ced9d98deea"). InnerVolumeSpecName "kube-api-access-jhgjd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.158138 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90beb70e-da16-489d-b0c8-3ced9d98deea" (UID: "90beb70e-da16-489d-b0c8-3ced9d98deea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.198522 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca\") pod \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.198617 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics\") pod \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.198705 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s78v\" (UniqueName: \"kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v\") pod \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.198752 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp\") pod \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\" (UID: \"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.199067 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.199083 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhgjd\" (UniqueName: \"kubernetes.io/projected/90beb70e-da16-489d-b0c8-3ced9d98deea-kube-api-access-jhgjd\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.199096 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90beb70e-da16-489d-b0c8-3ced9d98deea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.199387 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp" (OuterVolumeSpecName: "tmp") pod "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" (UID: "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.199601 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" (UID: "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.203652 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v" (OuterVolumeSpecName: "kube-api-access-2s78v") pod "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" (UID: "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7"). 
InnerVolumeSpecName "kube-api-access-2s78v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.203980 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" (UID: "fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.300241 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.300274 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2s78v\" (UniqueName: \"kubernetes.io/projected/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-kube-api-access-2s78v\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.300286 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.300297 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.345556 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.349054 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.502654 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities\") pod \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.503028 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content\") pod \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.503222 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities\") pod \"7aefb209-096f-4d97-bbde-df22378e9c13\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.503414 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content\") pod \"7aefb209-096f-4d97-bbde-df22378e9c13\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.503535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l577t\" (UniqueName: \"kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t\") pod \"7aefb209-096f-4d97-bbde-df22378e9c13\" (UID: \"7aefb209-096f-4d97-bbde-df22378e9c13\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.504106 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzfsf\" (UniqueName: \"kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf\") pod \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\" (UID: \"2a2741c8-968d-4f77-8be2-35619f1b1f4d\") " Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.504585 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities" (OuterVolumeSpecName: "utilities") pod "2a2741c8-968d-4f77-8be2-35619f1b1f4d" (UID: "2a2741c8-968d-4f77-8be2-35619f1b1f4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.505737 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities" (OuterVolumeSpecName: "utilities") pod "7aefb209-096f-4d97-bbde-df22378e9c13" (UID: "7aefb209-096f-4d97-bbde-df22378e9c13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.509036 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf" (OuterVolumeSpecName: "kube-api-access-jzfsf") pod "2a2741c8-968d-4f77-8be2-35619f1b1f4d" (UID: "2a2741c8-968d-4f77-8be2-35619f1b1f4d"). InnerVolumeSpecName "kube-api-access-jzfsf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.511485 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t" (OuterVolumeSpecName: "kube-api-access-l577t") pod "7aefb209-096f-4d97-bbde-df22378e9c13" (UID: "7aefb209-096f-4d97-bbde-df22378e9c13"). InnerVolumeSpecName "kube-api-access-l577t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.515999 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a2741c8-968d-4f77-8be2-35619f1b1f4d" (UID: "2a2741c8-968d-4f77-8be2-35619f1b1f4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.539043 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aefb209-096f-4d97-bbde-df22378e9c13" (UID: "7aefb209-096f-4d97-bbde-df22378e9c13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.563493 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.563514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2qnd9" event={"ID":"fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7","Type":"ContainerDied","Data":"5f1d6cc8810956e3c33321ff5de89732427c4f6fa87bb72014e3c8f71828d590"} Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.563668 5113 scope.go:117] "RemoveContainer" containerID="8c24388c5d6967b229870fcddace94d2e822b8ed420985fe6700b7a4273ef32a" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.568549 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtkxn" event={"ID":"90beb70e-da16-489d-b0c8-3ced9d98deea","Type":"ContainerDied","Data":"be5c600ae31a3cd6598ccd87f0818ab59cad228dc1acb86f237c89f72be2587f"} Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.568583 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtkxn" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.571485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8lq" event={"ID":"7aefb209-096f-4d97-bbde-df22378e9c13","Type":"ContainerDied","Data":"ac2cbedf98ccc74d505af2bbf40b1777cf16ba37ad53d64c4cd995cbeba5d2de"} Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.571523 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xk8lq" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.574494 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppmfs" event={"ID":"2a2741c8-968d-4f77-8be2-35619f1b1f4d","Type":"ContainerDied","Data":"3f21b0ae31c1042edeb6dc5bb326dbab1c7547525da060665d38bac7483ed7c4"} Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.574527 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppmfs" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.577722 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzcm5" event={"ID":"eab91f0e-3c39-4096-8d91-329f7e9812e8","Type":"ContainerDied","Data":"e546fa3a1ca3527794335b06ebb0b6b3c6d6bd683629ad8fddd17757a3f94a90"} Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.577805 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzcm5" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.583511 5113 scope.go:117] "RemoveContainer" containerID="ea446cf8240020f64b053a7f202913cc32090ba323f88bb450cd10268fd6390f" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.607070 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611158 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jzfsf\" (UniqueName: \"kubernetes.io/projected/2a2741c8-968d-4f77-8be2-35619f1b1f4d-kube-api-access-jzfsf\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611193 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611207 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a2741c8-968d-4f77-8be2-35619f1b1f4d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611219 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611230 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aefb209-096f-4d97-bbde-df22378e9c13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.611242 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l577t\" (UniqueName: \"kubernetes.io/projected/7aefb209-096f-4d97-bbde-df22378e9c13-kube-api-access-l577t\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.614320 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2qnd9"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.624826 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.625936 5113 scope.go:117] "RemoveContainer" 
containerID="4b83e0198c131bc381e50856316d7406034e678893a5c7b30dba36dead8fba33" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.629575 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtkxn"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.641635 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.649632 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppmfs"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.652820 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.659091 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rzcm5"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.664788 5113 scope.go:117] "RemoveContainer" containerID="dbc797a03724888ddaa10351dd8f84898eda3d26537c77e2851beed2358a2eb5" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.666000 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.669132 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xk8lq"] Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.677377 5113 scope.go:117] "RemoveContainer" containerID="f56fb335b7c839cfa4be9a9d5149e3549d706c4dfd372dd8780bd94d26dd6c7a" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.689615 5113 scope.go:117] "RemoveContainer" containerID="c8c8fc8fc16b07bc09ec467f02574390e15db7a439aa4c6ebf5010b9f97d9fe6" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.702837 5113 scope.go:117] "RemoveContainer" containerID="c1a3b6741c9fad6520ebbd4700c32a6d4b61d3b7f5173e18afaf332c7e627ccd" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.716199 5113 scope.go:117] "RemoveContainer" containerID="685dc44faab54c18fa0de2fc7a49b9453cbef7bed5e68a0546135d7457c24b29" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.729918 5113 scope.go:117] "RemoveContainer" containerID="9b3e8977e14fcffca80d8396e99649f6c805c6b160bafd81697572ccfc1fc69e" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.746230 5113 scope.go:117] "RemoveContainer" containerID="24f8f30cd9cadfe14d5107c82dada9167184b02eb53b1adb21b5f05185512d60" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.759415 5113 scope.go:117] "RemoveContainer" containerID="4d4078a33f54ea983811fb11ea9a645d1675d690c475410b84b353c0431b9d0a" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.772446 5113 scope.go:117] "RemoveContainer" containerID="01ce2823f9f94fcb385b48f282f113e519372b3183ee51df207f46f51c06e497" Dec 12 14:16:00 crc kubenswrapper[5113]: I1212 14:16:00.793166 5113 scope.go:117] "RemoveContainer" containerID="c2034757802f2a8afa5c9e928b4a81beca9b0467a728c6e51d4b0d86680b8722" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.322465 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323033 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323058 5113 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323069 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323075 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323084 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323091 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323098 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323105 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323136 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323143 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323151 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323156 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323167 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323172 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323179 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323185 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323192 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323209 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="extract-content" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323217 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323222 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323229 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323234 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323242 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323247 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323262 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323266 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="extract-utilities" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323368 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323379 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323385 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323391 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" containerName="registry-server" Dec 12 14:16:01 crc kubenswrapper[5113]: I1212 14:16:01.323401 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" containerName="marketplace-operator" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.462614 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.466051 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.473796 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a2741c8-968d-4f77-8be2-35619f1b1f4d" path="/var/lib/kubelet/pods/2a2741c8-968d-4f77-8be2-35619f1b1f4d/volumes" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.474646 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aefb209-096f-4d97-bbde-df22378e9c13" path="/var/lib/kubelet/pods/7aefb209-096f-4d97-bbde-df22378e9c13/volumes" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.475393 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90beb70e-da16-489d-b0c8-3ced9d98deea" path="/var/lib/kubelet/pods/90beb70e-da16-489d-b0c8-3ced9d98deea/volumes" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.477015 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab91f0e-3c39-4096-8d91-329f7e9812e8" path="/var/lib/kubelet/pods/eab91f0e-3c39-4096-8d91-329f7e9812e8/volumes" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.477868 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7" path="/var/lib/kubelet/pods/fe03d25b-e298-4d1b-8cf2-e62d9d80f1a7/volumes" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.480636 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.480676 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zsdgb"] Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.598679 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsdgb"] Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.598773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" event={"ID":"0508dd25-2dda-413d-9978-744802ef4487","Type":"ContainerStarted","Data":"59e67be3b01f39e6ecbb113d012bcdf4ab417652fa24a4096f399259384a0b39"} Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.598828 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.598897 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.601400 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.601934 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.622304 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-hkb9j" podStartSLOduration=4.622272322 podStartE2EDuration="4.622272322s" podCreationTimestamp="2025-12-12 14:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:16:02.620912228 +0000 UTC m=+345.456162105" watchObservedRunningTime="2025-12-12 14:16:02.622272322 +0000 UTC m=+345.457522149" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.641654 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.641787 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.641872 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.743448 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.743674 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-utilities\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.743839 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-catalog-content\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 
14:16:02.744072 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzlz\" (UniqueName: \"kubernetes.io/projected/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-kube-api-access-fxzlz\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.744256 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.744175 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.744382 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.744786 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.771459 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6\") pod \"redhat-marketplace-7kg5v\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.793753 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.848983 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-utilities\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.849095 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-catalog-content\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.849220 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzlz\" (UniqueName: \"kubernetes.io/projected/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-kube-api-access-fxzlz\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.849604 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-utilities\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.849690 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-catalog-content\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.871441 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzlz\" (UniqueName: \"kubernetes.io/projected/db61dc6c-dfeb-4df7-999f-1e4dacde0ae0-kube-api-access-fxzlz\") pod \"redhat-operators-zsdgb\" (UID: \"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0\") " pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:02 crc kubenswrapper[5113]: I1212 14:16:02.922850 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.141225 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:16:03 crc kubenswrapper[5113]: W1212 14:16:03.148245 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4486b4b_7cef_4488_8b89_1b86e4d0621f.slice/crio-5749b9ce0cc22cc1110315516c2a8cd09cb3d0c124b77bbd9c420880dc2d7076 WatchSource:0}: Error finding container 5749b9ce0cc22cc1110315516c2a8cd09cb3d0c124b77bbd9c420880dc2d7076: Status 404 returned error can't find the container with id 5749b9ce0cc22cc1110315516c2a8cd09cb3d0c124b77bbd9c420880dc2d7076 Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.494215 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsdgb"] Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.603495 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsdgb" event={"ID":"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0","Type":"ContainerStarted","Data":"e34a03a4d253958e1498b018d492ec99e602f159b85ccf377f81d281c4cba758"} Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.605038 5113 generic.go:358] "Generic (PLEG): container finished" podID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerID="076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9" exitCode=0 Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.605170 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerDied","Data":"076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9"} Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.605221 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerStarted","Data":"5749b9ce0cc22cc1110315516c2a8cd09cb3d0c124b77bbd9c420880dc2d7076"} Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.726517 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m795q"] Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.741212 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m795q"] Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.741390 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.745296 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.877858 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-utilities\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.877951 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvd8j\" (UniqueName: \"kubernetes.io/projected/50a1b827-e759-45ce-9812-7417e28ff665-kube-api-access-cvd8j\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.877984 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-catalog-content\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.931143 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hm5hk"] Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.938741 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm5hk"] Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.938979 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.941643 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.979086 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-utilities\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.979283 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvd8j\" (UniqueName: \"kubernetes.io/projected/50a1b827-e759-45ce-9812-7417e28ff665-kube-api-access-cvd8j\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.979321 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-catalog-content\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.979746 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-utilities\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:03 crc kubenswrapper[5113]: I1212 14:16:03.980068 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a1b827-e759-45ce-9812-7417e28ff665-catalog-content\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.006109 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvd8j\" (UniqueName: \"kubernetes.io/projected/50a1b827-e759-45ce-9812-7417e28ff665-kube-api-access-cvd8j\") pod \"community-operators-m795q\" (UID: \"50a1b827-e759-45ce-9812-7417e28ff665\") " pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.081628 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-utilities\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.081930 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz2kz\" (UniqueName: \"kubernetes.io/projected/72daa9fc-3e27-464a-8827-2a2fd488c3bc-kube-api-access-xz2kz\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.082032 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-catalog-content\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.157409 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.183564 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-utilities\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.183671 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xz2kz\" (UniqueName: \"kubernetes.io/projected/72daa9fc-3e27-464a-8827-2a2fd488c3bc-kube-api-access-xz2kz\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.183724 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-catalog-content\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.184273 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-utilities\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.184310 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72daa9fc-3e27-464a-8827-2a2fd488c3bc-catalog-content\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.210982 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz2kz\" (UniqueName: \"kubernetes.io/projected/72daa9fc-3e27-464a-8827-2a2fd488c3bc-kube-api-access-xz2kz\") pod \"certified-operators-hm5hk\" (UID: \"72daa9fc-3e27-464a-8827-2a2fd488c3bc\") " pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.270138 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.614557 5113 generic.go:358] "Generic (PLEG): container finished" podID="db61dc6c-dfeb-4df7-999f-1e4dacde0ae0" containerID="922e728b21162accec614356c60285e141788c8c9a3fed86ab3f6178b933dd46" exitCode=0 Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.614659 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsdgb" event={"ID":"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0","Type":"ContainerDied","Data":"922e728b21162accec614356c60285e141788c8c9a3fed86ab3f6178b933dd46"} Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.619979 5113 generic.go:358] "Generic (PLEG): container finished" podID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerID="bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab" exitCode=0 Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.620232 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerDied","Data":"bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab"} Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.680692 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m795q"] Dec 12 14:16:04 crc kubenswrapper[5113]: W1212 14:16:04.684942 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a1b827_e759_45ce_9812_7417e28ff665.slice/crio-a385f67731ac276c71f12e9790286f74872d9de1cbbdb2f9143f4d8a9d087425 WatchSource:0}: Error finding container a385f67731ac276c71f12e9790286f74872d9de1cbbdb2f9143f4d8a9d087425: Status 404 returned error can't find the container with id a385f67731ac276c71f12e9790286f74872d9de1cbbdb2f9143f4d8a9d087425 Dec 12 14:16:04 crc kubenswrapper[5113]: I1212 14:16:04.750632 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm5hk"] Dec 12 14:16:04 crc kubenswrapper[5113]: W1212 14:16:04.766435 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72daa9fc_3e27_464a_8827_2a2fd488c3bc.slice/crio-123b596b9ac56e549eeac875d3d60cf79400b5529274e7a2c65a2778d1ec532f WatchSource:0}: Error finding container 123b596b9ac56e549eeac875d3d60cf79400b5529274e7a2c65a2778d1ec532f: Status 404 returned error can't find the container with id 123b596b9ac56e549eeac875d3d60cf79400b5529274e7a2c65a2778d1ec532f Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.628150 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerStarted","Data":"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9"} Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.630594 5113 generic.go:358] "Generic (PLEG): container finished" podID="50a1b827-e759-45ce-9812-7417e28ff665" containerID="5d85d2a1e0d6c62ff4c52450a32c0dcf362c3dcd523fe670233ad966921de885" exitCode=0 Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.630694 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m795q" event={"ID":"50a1b827-e759-45ce-9812-7417e28ff665","Type":"ContainerDied","Data":"5d85d2a1e0d6c62ff4c52450a32c0dcf362c3dcd523fe670233ad966921de885"} 
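The MountVolume entries above show the kubelet wiring the marketplace catalog pods with two kubernetes.io/empty-dir volumes ("utilities" and "catalog-content") plus a projected service-account token (the kube-api-access-* volume), before the init container that extracts the catalog content runs and exits 0. Below is a minimal Go sketch of a pod spec shaped like those pods, using the upstream k8s.io/api types; the container name, image, and mount paths are illustrative placeholders, not values recovered from this log, and the kube-api-access-* projected volume is injected automatically by the API server rather than declared in the spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// catalogPod builds a pod shaped like the openshift-marketplace catalog pods
// in the log: two emptyDir volumes ("utilities", "catalog-content") that the
// kubelet reports as kubernetes.io/empty-dir in its MountVolume lines.
// Image and mount paths are placeholders, not taken from this log; the
// kube-api-access-* token volume is auto-injected and therefore omitted.
func catalogPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "openshift-marketplace"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
			Containers: []corev1.Container{{
				Name:  "registry-server",                  // placeholder name
				Image: "example.invalid/catalog:latest",   // placeholder image
				VolumeMounts: []corev1.VolumeMount{
					{Name: "utilities", MountPath: "/utilities"},
					{Name: "catalog-content", MountPath: "/extracted-catalog"},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(catalogPod("community-operators-m795q").Name)
}

Once such a pod is scheduled, the sequence in the surrounding entries follows: VerifyControllerAttachedVolume, MountVolume.SetUp for each volume, a new sandbox ("No sandbox for pod can be found. Need to start a new one"), then the PLEG ContainerStarted/ContainerDied events as the extract init container runs and the serving container comes up.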
Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.630720 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m795q" event={"ID":"50a1b827-e759-45ce-9812-7417e28ff665","Type":"ContainerStarted","Data":"a385f67731ac276c71f12e9790286f74872d9de1cbbdb2f9143f4d8a9d087425"} Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.636131 5113 generic.go:358] "Generic (PLEG): container finished" podID="72daa9fc-3e27-464a-8827-2a2fd488c3bc" containerID="21ddeea266ff8fe5b8cf7afe12f830f55942bbf6b4a035c2caced6669f6d487d" exitCode=0 Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.636208 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm5hk" event={"ID":"72daa9fc-3e27-464a-8827-2a2fd488c3bc","Type":"ContainerDied","Data":"21ddeea266ff8fe5b8cf7afe12f830f55942bbf6b4a035c2caced6669f6d487d"} Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.636238 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm5hk" event={"ID":"72daa9fc-3e27-464a-8827-2a2fd488c3bc","Type":"ContainerStarted","Data":"123b596b9ac56e549eeac875d3d60cf79400b5529274e7a2c65a2778d1ec532f"} Dec 12 14:16:05 crc kubenswrapper[5113]: I1212 14:16:05.673387 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kg5v" podStartSLOduration=4.084053147 podStartE2EDuration="4.673355984s" podCreationTimestamp="2025-12-12 14:16:01 +0000 UTC" firstStartedPulling="2025-12-12 14:16:03.608111969 +0000 UTC m=+346.443361796" lastFinishedPulling="2025-12-12 14:16:04.197414806 +0000 UTC m=+347.032664633" observedRunningTime="2025-12-12 14:16:05.64699035 +0000 UTC m=+348.482240187" watchObservedRunningTime="2025-12-12 14:16:05.673355984 +0000 UTC m=+348.508605821" Dec 12 14:16:06 crc kubenswrapper[5113]: I1212 14:16:06.670671 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsdgb" event={"ID":"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0","Type":"ContainerStarted","Data":"3ba94be49785dabfa715b069a7a0c6367ced3fc287a1ebdb513a4ec50c24b4d7"} Dec 12 14:16:06 crc kubenswrapper[5113]: I1212 14:16:06.674450 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm5hk" event={"ID":"72daa9fc-3e27-464a-8827-2a2fd488c3bc","Type":"ContainerStarted","Data":"4c94d32a83a6adb8b6729909d4b461877b44bc959094df17a55ed5f276bf1b85"} Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.683448 5113 generic.go:358] "Generic (PLEG): container finished" podID="72daa9fc-3e27-464a-8827-2a2fd488c3bc" containerID="4c94d32a83a6adb8b6729909d4b461877b44bc959094df17a55ed5f276bf1b85" exitCode=0 Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.683543 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm5hk" event={"ID":"72daa9fc-3e27-464a-8827-2a2fd488c3bc","Type":"ContainerDied","Data":"4c94d32a83a6adb8b6729909d4b461877b44bc959094df17a55ed5f276bf1b85"} Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.689449 5113 generic.go:358] "Generic (PLEG): container finished" podID="db61dc6c-dfeb-4df7-999f-1e4dacde0ae0" containerID="3ba94be49785dabfa715b069a7a0c6367ced3fc287a1ebdb513a4ec50c24b4d7" exitCode=0 Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.689627 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsdgb" 
event={"ID":"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0","Type":"ContainerDied","Data":"3ba94be49785dabfa715b069a7a0c6367ced3fc287a1ebdb513a4ec50c24b4d7"} Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.692169 5113 generic.go:358] "Generic (PLEG): container finished" podID="50a1b827-e759-45ce-9812-7417e28ff665" containerID="94197b07e206299acf6608549c7eaebfcadde2b39779abca8600f136c079e965" exitCode=0 Dec 12 14:16:07 crc kubenswrapper[5113]: I1212 14:16:07.692211 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m795q" event={"ID":"50a1b827-e759-45ce-9812-7417e28ff665","Type":"ContainerDied","Data":"94197b07e206299acf6608549c7eaebfcadde2b39779abca8600f136c079e965"} Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.700790 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsdgb" event={"ID":"db61dc6c-dfeb-4df7-999f-1e4dacde0ae0","Type":"ContainerStarted","Data":"b6d3873ecf787267a659f3f4ee0139e61dca9d8c5862a7ebf74e87a5ce1c4614"} Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.703769 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m795q" event={"ID":"50a1b827-e759-45ce-9812-7417e28ff665","Type":"ContainerStarted","Data":"b68c46271e16552acb42c1fa133778145801914ea722ed016e045a6eb14de618"} Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.709677 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm5hk" event={"ID":"72daa9fc-3e27-464a-8827-2a2fd488c3bc","Type":"ContainerStarted","Data":"4b76e8d7e0d6f152d80da517a4b1528c52d2eeb716e8cba3aaae3eb672e62efe"} Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.721812 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zsdgb" podStartSLOduration=6.536634108 podStartE2EDuration="7.721798461s" podCreationTimestamp="2025-12-12 14:16:01 +0000 UTC" firstStartedPulling="2025-12-12 14:16:04.616046102 +0000 UTC m=+347.451295929" lastFinishedPulling="2025-12-12 14:16:05.801210455 +0000 UTC m=+348.636460282" observedRunningTime="2025-12-12 14:16:08.715775758 +0000 UTC m=+351.551025595" watchObservedRunningTime="2025-12-12 14:16:08.721798461 +0000 UTC m=+351.557048288" Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.744244 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hm5hk" podStartSLOduration=5.020142739 podStartE2EDuration="5.744222549s" podCreationTimestamp="2025-12-12 14:16:03 +0000 UTC" firstStartedPulling="2025-12-12 14:16:05.636922798 +0000 UTC m=+348.472172625" lastFinishedPulling="2025-12-12 14:16:06.361002608 +0000 UTC m=+349.196252435" observedRunningTime="2025-12-12 14:16:08.742295397 +0000 UTC m=+351.577545234" watchObservedRunningTime="2025-12-12 14:16:08.744222549 +0000 UTC m=+351.579472376" Dec 12 14:16:08 crc kubenswrapper[5113]: I1212 14:16:08.771405 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m795q" podStartSLOduration=4.757966079 podStartE2EDuration="5.771387138s" podCreationTimestamp="2025-12-12 14:16:03 +0000 UTC" firstStartedPulling="2025-12-12 14:16:05.631380141 +0000 UTC m=+348.466629978" lastFinishedPulling="2025-12-12 14:16:06.64480121 +0000 UTC m=+349.480051037" observedRunningTime="2025-12-12 14:16:08.765737587 +0000 UTC m=+351.600987444" watchObservedRunningTime="2025-12-12 
14:16:08.771387138 +0000 UTC m=+351.606636965" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.794091 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.794698 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.846457 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.923881 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.923947 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:12 crc kubenswrapper[5113]: I1212 14:16:12.972757 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:13 crc kubenswrapper[5113]: I1212 14:16:13.823437 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:16:13 crc kubenswrapper[5113]: I1212 14:16:13.827227 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zsdgb" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.158113 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.158404 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.197071 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.270508 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.270551 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.310736 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.826893 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m795q" Dec 12 14:16:14 crc kubenswrapper[5113]: I1212 14:16:14.833052 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hm5hk" Dec 12 14:16:17 crc kubenswrapper[5113]: I1212 14:16:17.530082 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-dlxjn" Dec 12 14:16:17 crc kubenswrapper[5113]: I1212 14:16:17.582097 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:16:42 crc kubenswrapper[5113]: I1212 
14:16:42.624641 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" podUID="87300cd0-fd46-44e7-9925-c8cf3322b686" containerName="registry" containerID="cri-o://b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd" gracePeriod=30 Dec 12 14:16:42 crc kubenswrapper[5113]: I1212 14:16:42.960494 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.042764 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.042818 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043087 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043164 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5lvp\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043209 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043280 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043296 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.043349 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token\") pod \"87300cd0-fd46-44e7-9925-c8cf3322b686\" (UID: \"87300cd0-fd46-44e7-9925-c8cf3322b686\") " Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.045250 5113 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.045489 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.052297 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.057278 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.057592 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.057771 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.057776 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp" (OuterVolumeSpecName: "kube-api-access-v5lvp") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "kube-api-access-v5lvp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061675 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "87300cd0-fd46-44e7-9925-c8cf3322b686" (UID: "87300cd0-fd46-44e7-9925-c8cf3322b686"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061729 5113 generic.go:358] "Generic (PLEG): container finished" podID="87300cd0-fd46-44e7-9925-c8cf3322b686" containerID="b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd" exitCode=0 Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061765 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" event={"ID":"87300cd0-fd46-44e7-9925-c8cf3322b686","Type":"ContainerDied","Data":"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd"} Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061810 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" event={"ID":"87300cd0-fd46-44e7-9925-c8cf3322b686","Type":"ContainerDied","Data":"a9a122761c42506f7ffc19b69805169cf2b3446332b21cdf0609ac308a7dd663"} Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061831 5113 scope.go:117] "RemoveContainer" containerID="b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.061828 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cd7rw" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.096435 5113 scope.go:117] "RemoveContainer" containerID="b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd" Dec 12 14:16:43 crc kubenswrapper[5113]: E1212 14:16:43.098059 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd\": container with ID starting with b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd not found: ID does not exist" containerID="b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.098234 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd"} err="failed to get container status \"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd\": rpc error: code = NotFound desc = could not find container \"b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd\": container with ID starting with b5d43bab7023565341d21d43e30dd246743af82b2da82fcdb4c8ea226c1b15bd not found: ID does not exist" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.100858 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.106145 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cd7rw"] Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.144876 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87300cd0-fd46-44e7-9925-c8cf3322b686-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.144921 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc 
kubenswrapper[5113]: I1212 14:16:43.144935 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.144945 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.144956 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87300cd0-fd46-44e7-9925-c8cf3322b686-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.144966 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87300cd0-fd46-44e7-9925-c8cf3322b686-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.145059 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5lvp\" (UniqueName: \"kubernetes.io/projected/87300cd0-fd46-44e7-9925-c8cf3322b686-kube-api-access-v5lvp\") on node \"crc\" DevicePath \"\"" Dec 12 14:16:43 crc kubenswrapper[5113]: I1212 14:16:43.491504 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87300cd0-fd46-44e7-9925-c8cf3322b686" path="/var/lib/kubelet/pods/87300cd0-fd46-44e7-9925-c8cf3322b686/volumes" Dec 12 14:17:50 crc kubenswrapper[5113]: I1212 14:17:50.902493 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:17:50 crc kubenswrapper[5113]: I1212 14:17:50.902972 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:18:20 crc kubenswrapper[5113]: I1212 14:18:20.935426 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:18:20 crc kubenswrapper[5113]: I1212 14:18:20.935930 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:18:50 crc kubenswrapper[5113]: I1212 14:18:50.902537 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:18:50 crc kubenswrapper[5113]: I1212 14:18:50.903112 5113 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:18:50 crc kubenswrapper[5113]: I1212 14:18:50.903204 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:18:50 crc kubenswrapper[5113]: I1212 14:18:50.903974 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:18:50 crc kubenswrapper[5113]: I1212 14:18:50.904044 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0" gracePeriod=600 Dec 12 14:18:51 crc kubenswrapper[5113]: I1212 14:18:51.804114 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0" exitCode=0 Dec 12 14:18:51 crc kubenswrapper[5113]: I1212 14:18:51.804168 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0"} Dec 12 14:18:51 crc kubenswrapper[5113]: I1212 14:18:51.804504 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37"} Dec 12 14:18:51 crc kubenswrapper[5113]: I1212 14:18:51.804522 5113 scope.go:117] "RemoveContainer" containerID="0e7e34f9b9a8d598a4a4dc4523ac4110df25d8397f7991be7de07cc59bc98748" Dec 12 14:19:17 crc kubenswrapper[5113]: I1212 14:19:17.755903 5113 scope.go:117] "RemoveContainer" containerID="75c3b32fe0c5589c7d1375815e6bd6dd4c2c2d37126258a87da64d9571e06915" Dec 12 14:20:18 crc kubenswrapper[5113]: I1212 14:20:18.227362 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:20:18 crc kubenswrapper[5113]: I1212 14:20:18.233773 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:20:45 crc kubenswrapper[5113]: I1212 14:20:45.456967 5113 ???:1] "http: TLS handshake error from 192.168.126.11:57236: no serving certificate available for the kubelet" Dec 12 14:21:17 crc kubenswrapper[5113]: I1212 14:21:17.810930 5113 scope.go:117] "RemoveContainer" containerID="ea0a59221452e97b0adf3b7a50c25b59032d2dafc49beb916d6545fd9984eeda" Dec 12 14:21:20 crc kubenswrapper[5113]: I1212 14:21:20.901355 5113 
patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:21:20 crc kubenswrapper[5113]: I1212 14:21:20.901696 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:21:48 crc kubenswrapper[5113]: I1212 14:21:48.818765 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l"] Dec 12 14:21:48 crc kubenswrapper[5113]: I1212 14:21:48.819653 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="kube-rbac-proxy" containerID="cri-o://4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" gracePeriod=30 Dec 12 14:21:48 crc kubenswrapper[5113]: I1212 14:21:48.819686 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="ovnkube-cluster-manager" containerID="cri-o://eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.027535 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.044353 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4qrsn"] Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045204 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-controller" containerID="cri-o://ceba849bd0445b0fb2ea2c10866e8c04cdb72059e168a7aa59f86db129ad709b" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045272 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="sbdb" containerID="cri-o://16651a31482f1c0cd6fe233f3183f63d8e017c1471ce1d233cf2547ae35770a0" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045333 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="northd" containerID="cri-o://898e119131ccbd53d3b08653df43ed219b33f88ba94eed34e8afda71f84a7b81" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045247 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="nbdb" containerID="cri-o://a0fbb92681c46a42102eed9649761e19acecf2f81db190c81cd551fd57a33f7b" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045261 5113 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-acl-logging" containerID="cri-o://0f6cbc907d43156329193d6afe6cbbf8751a75292df13623820b8e45fe47be0b" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045280 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-node" containerID="cri-o://aeb49bc4fec2311eb34b420f5efcaca8b3f5e898cf18eff29bd8d356a441ba52" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.045223 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ecbd97312a89db090730ecb5c8d7d34608a6d785113daf82cfd6cad10384efcc" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.053924 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb"] Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056740 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="ovnkube-cluster-manager" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056780 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="ovnkube-cluster-manager" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056803 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87300cd0-fd46-44e7-9925-c8cf3322b686" containerName="registry" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056812 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="87300cd0-fd46-44e7-9925-c8cf3322b686" containerName="registry" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056843 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="kube-rbac-proxy" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056851 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="kube-rbac-proxy" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056974 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="kube-rbac-proxy" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.056988 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="87300cd0-fd46-44e7-9925-c8cf3322b686" containerName="registry" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.057002 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" containerName="ovnkube-cluster-manager" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.075701 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovnkube-controller" containerID="cri-o://9237fb1c3f84937b9ed5ae24b5cd152e156c207d984f30022dbc059ffce50820" gracePeriod=30 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.148730 5113 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.184733 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert\") pod \"e7fc971e-760a-4530-b3b2-7975699b4383\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.184819 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides\") pod \"e7fc971e-760a-4530-b3b2-7975699b4383\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.184864 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config\") pod \"e7fc971e-760a-4530-b3b2-7975699b4383\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.184926 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwrnz\" (UniqueName: \"kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz\") pod \"e7fc971e-760a-4530-b3b2-7975699b4383\" (UID: \"e7fc971e-760a-4530-b3b2-7975699b4383\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.187823 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e7fc971e-760a-4530-b3b2-7975699b4383" (UID: "e7fc971e-760a-4530-b3b2-7975699b4383"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.188572 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e7fc971e-760a-4530-b3b2-7975699b4383" (UID: "e7fc971e-760a-4530-b3b2-7975699b4383"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.192691 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz" (OuterVolumeSpecName: "kube-api-access-lwrnz") pod "e7fc971e-760a-4530-b3b2-7975699b4383" (UID: "e7fc971e-760a-4530-b3b2-7975699b4383"). InnerVolumeSpecName "kube-api-access-lwrnz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.192720 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "e7fc971e-760a-4530-b3b2-7975699b4383" (UID: "e7fc971e-760a-4530-b3b2-7975699b4383"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286287 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286353 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286611 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf2jq\" (UniqueName: \"kubernetes.io/projected/9ba83a93-adf6-4dcd-a173-3c359d3693f2-kube-api-access-bf2jq\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286844 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7fc971e-760a-4530-b3b2-7975699b4383-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286862 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286872 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e7fc971e-760a-4530-b3b2-7975699b4383-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.286881 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwrnz\" (UniqueName: \"kubernetes.io/projected/e7fc971e-760a-4530-b3b2-7975699b4383-kube-api-access-lwrnz\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374566 5113 generic.go:358] "Generic (PLEG): container finished" podID="e7fc971e-760a-4530-b3b2-7975699b4383" containerID="eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374598 5113 generic.go:358] "Generic (PLEG): container finished" podID="e7fc971e-760a-4530-b3b2-7975699b4383" containerID="4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374702 5113 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerDied","Data":"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374714 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374759 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerDied","Data":"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l" event={"ID":"e7fc971e-760a-4530-b3b2-7975699b4383","Type":"ContainerDied","Data":"7cf40a6b81709e43acc2a332891bb48f8f453561f5b3780f289b8ec58d55e9c6"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.374794 5113 scope.go:117] "RemoveContainer" containerID="eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.381112 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-acl-logging/0.log" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.381735 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-controller/0.log" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382817 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="9237fb1c3f84937b9ed5ae24b5cd152e156c207d984f30022dbc059ffce50820" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382846 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="16651a31482f1c0cd6fe233f3183f63d8e017c1471ce1d233cf2547ae35770a0" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382853 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="a0fbb92681c46a42102eed9649761e19acecf2f81db190c81cd551fd57a33f7b" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382863 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="898e119131ccbd53d3b08653df43ed219b33f88ba94eed34e8afda71f84a7b81" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382869 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="ecbd97312a89db090730ecb5c8d7d34608a6d785113daf82cfd6cad10384efcc" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382875 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="aeb49bc4fec2311eb34b420f5efcaca8b3f5e898cf18eff29bd8d356a441ba52" exitCode=0 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382883 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" 
containerID="0f6cbc907d43156329193d6afe6cbbf8751a75292df13623820b8e45fe47be0b" exitCode=143 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382891 5113 generic.go:358] "Generic (PLEG): container finished" podID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerID="ceba849bd0445b0fb2ea2c10866e8c04cdb72059e168a7aa59f86db129ad709b" exitCode=143 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382914 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"9237fb1c3f84937b9ed5ae24b5cd152e156c207d984f30022dbc059ffce50820"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382949 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"16651a31482f1c0cd6fe233f3183f63d8e017c1471ce1d233cf2547ae35770a0"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382963 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"a0fbb92681c46a42102eed9649761e19acecf2f81db190c81cd551fd57a33f7b"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.382975 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"898e119131ccbd53d3b08653df43ed219b33f88ba94eed34e8afda71f84a7b81"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.383015 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"ecbd97312a89db090730ecb5c8d7d34608a6d785113daf82cfd6cad10384efcc"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.383029 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"aeb49bc4fec2311eb34b420f5efcaca8b3f5e898cf18eff29bd8d356a441ba52"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.383082 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"0f6cbc907d43156329193d6afe6cbbf8751a75292df13623820b8e45fe47be0b"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.383099 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"ceba849bd0445b0fb2ea2c10866e8c04cdb72059e168a7aa59f86db129ad709b"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.384786 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.384828 5113 generic.go:358] "Generic (PLEG): container finished" podID="f61630ce-4572-40eb-b245-937168ad79d4" containerID="1a9d9ed7a0744ebcd9ff37c8fac9b74ab9642f0b437e4772adf7a2837e766b3b" exitCode=2 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.384927 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hnmf9" 
event={"ID":"f61630ce-4572-40eb-b245-937168ad79d4","Type":"ContainerDied","Data":"1a9d9ed7a0744ebcd9ff37c8fac9b74ab9642f0b437e4772adf7a2837e766b3b"} Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.385484 5113 scope.go:117] "RemoveContainer" containerID="1a9d9ed7a0744ebcd9ff37c8fac9b74ab9642f0b437e4772adf7a2837e766b3b" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.388198 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bf2jq\" (UniqueName: \"kubernetes.io/projected/9ba83a93-adf6-4dcd-a173-3c359d3693f2-kube-api-access-bf2jq\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.388248 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.388298 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.388334 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.389168 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.389193 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9ba83a93-adf6-4dcd-a173-3c359d3693f2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.391884 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ba83a93-adf6-4dcd-a173-3c359d3693f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.395253 5113 scope.go:117] "RemoveContainer" containerID="4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 
14:21:49.395835 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.407050 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf2jq\" (UniqueName: \"kubernetes.io/projected/9ba83a93-adf6-4dcd-a173-3c359d3693f2-kube-api-access-bf2jq\") pod \"ovnkube-control-plane-97c9b6c48-mcmhb\" (UID: \"9ba83a93-adf6-4dcd-a173-3c359d3693f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.521303 5113 scope.go:117] "RemoveContainer" containerID="eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" Dec 12 14:21:49 crc kubenswrapper[5113]: E1212 14:21:49.528056 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836\": container with ID starting with eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836 not found: ID does not exist" containerID="eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.528111 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836"} err="failed to get container status \"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836\": rpc error: code = NotFound desc = could not find container \"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836\": container with ID starting with eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836 not found: ID does not exist" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.528160 5113 scope.go:117] "RemoveContainer" containerID="4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" Dec 12 14:21:49 crc kubenswrapper[5113]: E1212 14:21:49.528563 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f\": container with ID starting with 4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f not found: ID does not exist" containerID="4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.528609 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f"} err="failed to get container status \"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f\": rpc error: code = NotFound desc = could not find container \"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f\": container with ID starting with 4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f not found: ID does not exist" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.528642 5113 scope.go:117] "RemoveContainer" containerID="eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.529363 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836"} err="failed to get container status \"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836\": rpc error: code = 
NotFound desc = could not find container \"eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836\": container with ID starting with eac77587b7081d8abfa8dd3acff61b87583e06e3482f272ddfe3c1a2b0799836 not found: ID does not exist" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.529384 5113 scope.go:117] "RemoveContainer" containerID="4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.529609 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f"} err="failed to get container status \"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f\": rpc error: code = NotFound desc = could not find container \"4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f\": container with ID starting with 4ad1cfeee55bb37745e0abcb54bce246fae0310e2d4b56782f074325af4a787f not found: ID does not exist" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.536210 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l"] Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.536246 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9hc2l"] Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.609542 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" Dec 12 14:21:49 crc kubenswrapper[5113]: W1212 14:21:49.636442 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ba83a93_adf6_4dcd_a173_3c359d3693f2.slice/crio-5e8db07415b539e918fc930868b2b4a23454d266ab0386de1d9250b4d925b7f4 WatchSource:0}: Error finding container 5e8db07415b539e918fc930868b2b4a23454d266ab0386de1d9250b4d925b7f4: Status 404 returned error can't find the container with id 5e8db07415b539e918fc930868b2b4a23454d266ab0386de1d9250b4d925b7f4 Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.720481 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-acl-logging/0.log" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.721083 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-controller/0.log" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.721539 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801131 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m66r9"] Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801628 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-acl-logging" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801644 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-acl-logging" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801660 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="sbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801666 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="sbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801674 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="northd" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801680 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="northd" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801701 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kubecfg-setup" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801707 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kubecfg-setup" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801722 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="nbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801727 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="nbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801735 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801740 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801747 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801754 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801764 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-node" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801769 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-node" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 
14:21:49.801775 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovnkube-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801781 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovnkube-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801856 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="sbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801867 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-node" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801875 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-acl-logging" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801885 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801892 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovn-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801899 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="northd" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801905 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="ovnkube-controller" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.801911 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" containerName="nbdb" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.821425 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833471 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833502 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833518 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833544 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhr7d\" (UniqueName: \"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833568 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833586 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833617 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833631 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833655 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833681 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833695 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833737 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833751 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833794 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833815 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833855 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833877 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833891 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833935 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.833971 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn\") pod \"74da26a1-71e9-47b4-bb18-cef44b9df055\" (UID: \"74da26a1-71e9-47b4-bb18-cef44b9df055\") " Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834068 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-var-lib-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834090 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-kubelet\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834107 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-netd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834135 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-log-socket\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834162 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvwm\" (UniqueName: \"kubernetes.io/projected/d50d43a5-ee44-403c-bfa1-8093fec2788d-kube-api-access-lqvwm\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834191 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovn-node-metrics-cert\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834210 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834227 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834250 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-env-overrides\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834272 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-node-log\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834316 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-netns\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834334 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-etc-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834352 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-systemd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834367 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-ovn\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834384 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-bin\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834399 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-config\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834424 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-slash\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc 
kubenswrapper[5113]: I1212 14:21:49.834443 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-systemd-units\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.834458 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.838582 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-script-lib\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840333 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840333 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log" (OuterVolumeSpecName: "node-log") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840366 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840406 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840842 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840880 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840907 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840925 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840945 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.840967 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket" (OuterVolumeSpecName: "log-socket") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841303 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841330 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841353 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841374 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash" (OuterVolumeSpecName: "host-slash") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841391 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.841926 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.842351 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.856074 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d" (OuterVolumeSpecName: "kube-api-access-zhr7d") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "kube-api-access-zhr7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.873254 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.873459 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "74da26a1-71e9-47b4-bb18-cef44b9df055" (UID: "74da26a1-71e9-47b4-bb18-cef44b9df055"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.942916 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-var-lib-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.942976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-kubelet\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943002 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-netd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943023 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-log-socket\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943159 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-kubelet\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943638 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-netd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943664 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-var-lib-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943698 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-log-socket\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943571 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqvwm\" (UniqueName: \"kubernetes.io/projected/d50d43a5-ee44-403c-bfa1-8093fec2788d-kube-api-access-lqvwm\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.943998 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovn-node-metrics-cert\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944027 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944051 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944079 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-env-overrides\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944105 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-node-log\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944168 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-netns\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944193 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-etc-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944215 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-systemd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944234 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-ovn\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944258 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-bin\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944277 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-config\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944306 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-slash\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944328 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-systemd-units\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944344 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-script-lib\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944433 5113 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944445 5113 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944453 5113 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944462 5113 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944470 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhr7d\" (UniqueName: 
\"kubernetes.io/projected/74da26a1-71e9-47b4-bb18-cef44b9df055-kube-api-access-zhr7d\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944479 5113 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944488 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944497 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944505 5113 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944512 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944521 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944529 5113 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944537 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944545 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944553 5113 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944561 5113 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944569 5113 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944576 5113 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/74da26a1-71e9-47b4-bb18-cef44b9df055-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944584 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74da26a1-71e9-47b4-bb18-cef44b9df055-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944592 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74da26a1-71e9-47b4-bb18-cef44b9df055-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.944921 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-etc-openvswitch\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945288 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-node-log\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945354 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945393 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-run-netns\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945689 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-script-lib\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945727 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-slash\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945751 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-systemd-units\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945774 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-openvswitch\") pod 
\"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945851 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-ovn\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945921 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-run-systemd\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.945989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.946010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-env-overrides\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.946047 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50d43a5-ee44-403c-bfa1-8093fec2788d-host-cni-bin\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.946159 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovnkube-config\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.950774 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50d43a5-ee44-403c-bfa1-8093fec2788d-ovn-node-metrics-cert\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:49 crc kubenswrapper[5113]: I1212 14:21:49.961767 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqvwm\" (UniqueName: \"kubernetes.io/projected/d50d43a5-ee44-403c-bfa1-8093fec2788d-kube-api-access-lqvwm\") pod \"ovnkube-node-m66r9\" (UID: \"d50d43a5-ee44-403c-bfa1-8093fec2788d\") " pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.148844 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:21:50 crc kubenswrapper[5113]: W1212 14:21:50.166437 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd50d43a5_ee44_403c_bfa1_8093fec2788d.slice/crio-23acb1d5ea5ebbf6fc7d1b180d9b02f7563c2c17eda4869db432ce3ad6f56246 WatchSource:0}: Error finding container 23acb1d5ea5ebbf6fc7d1b180d9b02f7563c2c17eda4869db432ce3ad6f56246: Status 404 returned error can't find the container with id 23acb1d5ea5ebbf6fc7d1b180d9b02f7563c2c17eda4869db432ce3ad6f56246 Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.393691 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-acl-logging/0.log" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.394470 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4qrsn_74da26a1-71e9-47b4-bb18-cef44b9df055/ovn-controller/0.log" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.395048 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" event={"ID":"74da26a1-71e9-47b4-bb18-cef44b9df055","Type":"ContainerDied","Data":"2ae16c94c73243550c035e4b50ab531c22c8c260a9e248c74642bfaa87abbde8"} Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.395140 5113 scope.go:117] "RemoveContainer" containerID="9237fb1c3f84937b9ed5ae24b5cd152e156c207d984f30022dbc059ffce50820" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.395204 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4qrsn" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.402269 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.402399 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hnmf9" event={"ID":"f61630ce-4572-40eb-b245-937168ad79d4","Type":"ContainerStarted","Data":"6e6ef0b98bb1b6450c107564100ff5068bc8d8e99ef742e07575f653e9ccdffd"} Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.404992 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" event={"ID":"9ba83a93-adf6-4dcd-a173-3c359d3693f2","Type":"ContainerStarted","Data":"73fcdba8beeba52a2b209688b8f988218090b09ff98c11edfb7f030f6cc72be2"} Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.405022 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" event={"ID":"9ba83a93-adf6-4dcd-a173-3c359d3693f2","Type":"ContainerStarted","Data":"5e8db07415b539e918fc930868b2b4a23454d266ab0386de1d9250b4d925b7f4"} Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.406332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"23acb1d5ea5ebbf6fc7d1b180d9b02f7563c2c17eda4869db432ce3ad6f56246"} Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.430912 5113 scope.go:117] "RemoveContainer" containerID="16651a31482f1c0cd6fe233f3183f63d8e017c1471ce1d233cf2547ae35770a0" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.446371 5113 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4qrsn"] Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.449305 5113 scope.go:117] "RemoveContainer" containerID="a0fbb92681c46a42102eed9649761e19acecf2f81db190c81cd551fd57a33f7b" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.450415 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4qrsn"] Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.464830 5113 scope.go:117] "RemoveContainer" containerID="898e119131ccbd53d3b08653df43ed219b33f88ba94eed34e8afda71f84a7b81" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.524403 5113 scope.go:117] "RemoveContainer" containerID="ecbd97312a89db090730ecb5c8d7d34608a6d785113daf82cfd6cad10384efcc" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.542212 5113 scope.go:117] "RemoveContainer" containerID="aeb49bc4fec2311eb34b420f5efcaca8b3f5e898cf18eff29bd8d356a441ba52" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.555690 5113 scope.go:117] "RemoveContainer" containerID="0f6cbc907d43156329193d6afe6cbbf8751a75292df13623820b8e45fe47be0b" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.570602 5113 scope.go:117] "RemoveContainer" containerID="ceba849bd0445b0fb2ea2c10866e8c04cdb72059e168a7aa59f86db129ad709b" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.584490 5113 scope.go:117] "RemoveContainer" containerID="4a3efe72e0f387f4cd7268a568aca8a828923df9591740fe92f5adc14be636a4" Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.902098 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:21:50 crc kubenswrapper[5113]: I1212 14:21:50.902280 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.422779 5113 generic.go:358] "Generic (PLEG): container finished" podID="d50d43a5-ee44-403c-bfa1-8093fec2788d" containerID="11b1b9f5075258d40b4e3043c9a5ef5380db61a819af439ac94123d274a40601" exitCode=0 Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.422913 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerDied","Data":"11b1b9f5075258d40b4e3043c9a5ef5380db61a819af439ac94123d274a40601"} Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.429296 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" event={"ID":"9ba83a93-adf6-4dcd-a173-3c359d3693f2","Type":"ContainerStarted","Data":"58930e9ade297fa6f1dddfe14750a404202e6a52ebcc887ad8c4d8f92ce312bc"} Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.475222 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-mcmhb" podStartSLOduration=3.4752045799999998 podStartE2EDuration="3.47520458s" podCreationTimestamp="2025-12-12 14:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:21:51.473223489 +0000 UTC m=+694.308473336" watchObservedRunningTime="2025-12-12 14:21:51.47520458 +0000 UTC m=+694.310454407" Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.491766 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74da26a1-71e9-47b4-bb18-cef44b9df055" path="/var/lib/kubelet/pods/74da26a1-71e9-47b4-bb18-cef44b9df055/volumes" Dec 12 14:21:51 crc kubenswrapper[5113]: I1212 14:21:51.492963 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fc971e-760a-4530-b3b2-7975699b4383" path="/var/lib/kubelet/pods/e7fc971e-760a-4530-b3b2-7975699b4383/volumes" Dec 12 14:21:52 crc kubenswrapper[5113]: I1212 14:21:52.438572 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"d806ddae52a863c9deafe35c3105ed5a9f9aebfbf001d62159cd9d85e117f4f2"} Dec 12 14:21:52 crc kubenswrapper[5113]: I1212 14:21:52.438625 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"d53026484531186e810e2da4494f49cdc6f6d41a6fd01525d335acd4c2165f43"} Dec 12 14:21:52 crc kubenswrapper[5113]: I1212 14:21:52.438635 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"9be990eff113c6c29c128104440d5ece8d49472da2da9781ab889e868ae7b102"} Dec 12 14:21:52 crc kubenswrapper[5113]: I1212 14:21:52.438647 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"649c42308f9f19e3b727bb3d9c22cb7ecf22997141e819f07642784c13cd49f6"} Dec 12 14:21:53 crc kubenswrapper[5113]: I1212 14:21:53.449292 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"ceb54c34687ffe90b03742a401b58820ff912b4000701fd665f673742c0ec705"} Dec 12 14:21:53 crc kubenswrapper[5113]: I1212 14:21:53.449596 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"7271eb26f757aa168613a1e663110865947aa9c049696a03f1cddbeaabafde68"} Dec 12 14:21:58 crc kubenswrapper[5113]: I1212 14:21:58.482706 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"a736037c3588108c82312e345913f8ed4802a17df68619252923518d2c3544a5"} Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.497827 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" event={"ID":"d50d43a5-ee44-403c-bfa1-8093fec2788d","Type":"ContainerStarted","Data":"93ce5a549ac412a088d58fcb44545c3a6e373ae99a2ef7d39688e024fd625ecd"} Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.498303 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.498408 5113 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.498488 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.527310 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.527589 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:22:00 crc kubenswrapper[5113]: I1212 14:22:00.534031 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" podStartSLOduration=11.534015408 podStartE2EDuration="11.534015408s" podCreationTimestamp="2025-12-12 14:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:22:00.531660414 +0000 UTC m=+703.366910241" watchObservedRunningTime="2025-12-12 14:22:00.534015408 +0000 UTC m=+703.369265235" Dec 12 14:22:20 crc kubenswrapper[5113]: I1212 14:22:20.902037 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:22:20 crc kubenswrapper[5113]: I1212 14:22:20.902521 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:22:20 crc kubenswrapper[5113]: I1212 14:22:20.902567 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:22:20 crc kubenswrapper[5113]: I1212 14:22:20.903148 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:22:20 crc kubenswrapper[5113]: I1212 14:22:20.903203 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37" gracePeriod=600 Dec 12 14:22:21 crc kubenswrapper[5113]: I1212 14:22:21.610430 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37" exitCode=0 Dec 12 14:22:21 crc kubenswrapper[5113]: I1212 14:22:21.610514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" 
event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37"} Dec 12 14:22:21 crc kubenswrapper[5113]: I1212 14:22:21.611076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5"} Dec 12 14:22:21 crc kubenswrapper[5113]: I1212 14:22:21.611174 5113 scope.go:117] "RemoveContainer" containerID="28d73d1a270b748c99c173d6242f40f212be4531ede388f7402b08f661b2e0f0" Dec 12 14:22:32 crc kubenswrapper[5113]: I1212 14:22:32.531529 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m66r9" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.519630 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.534485 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.534633 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.645348 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flc4b\" (UniqueName: \"kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.645431 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.645545 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.746781 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.747141 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.747170 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-flc4b\" 
(UniqueName: \"kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.747411 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.747478 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.767062 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-flc4b\" (UniqueName: \"kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b\") pod \"redhat-operators-mfbtg\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:18 crc kubenswrapper[5113]: I1212 14:23:18.852739 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:19 crc kubenswrapper[5113]: I1212 14:23:19.048946 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:19 crc kubenswrapper[5113]: I1212 14:23:19.976764 5113 generic.go:358] "Generic (PLEG): container finished" podID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerID="a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9" exitCode=0 Dec 12 14:23:19 crc kubenswrapper[5113]: I1212 14:23:19.976853 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerDied","Data":"a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9"} Dec 12 14:23:19 crc kubenswrapper[5113]: I1212 14:23:19.976921 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerStarted","Data":"e723b97269655958b7bd30cb478e155dade05361291464298cb4e8c7962117cf"} Dec 12 14:23:20 crc kubenswrapper[5113]: I1212 14:23:20.986183 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerStarted","Data":"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d"} Dec 12 14:23:21 crc kubenswrapper[5113]: I1212 14:23:21.995653 5113 generic.go:358] "Generic (PLEG): container finished" podID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerID="d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d" exitCode=0 Dec 12 14:23:21 crc kubenswrapper[5113]: I1212 14:23:21.995729 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerDied","Data":"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d"} Dec 12 14:23:23 crc 
kubenswrapper[5113]: I1212 14:23:23.004229 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerStarted","Data":"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af"} Dec 12 14:23:23 crc kubenswrapper[5113]: I1212 14:23:23.028590 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mfbtg" podStartSLOduration=4.330077486 podStartE2EDuration="5.028561878s" podCreationTimestamp="2025-12-12 14:23:18 +0000 UTC" firstStartedPulling="2025-12-12 14:23:19.977504555 +0000 UTC m=+782.812754392" lastFinishedPulling="2025-12-12 14:23:20.675988957 +0000 UTC m=+783.511238784" observedRunningTime="2025-12-12 14:23:23.02169228 +0000 UTC m=+785.856942137" watchObservedRunningTime="2025-12-12 14:23:23.028561878 +0000 UTC m=+785.863811725" Dec 12 14:23:28 crc kubenswrapper[5113]: I1212 14:23:28.853334 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:28 crc kubenswrapper[5113]: I1212 14:23:28.856060 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:28 crc kubenswrapper[5113]: I1212 14:23:28.906845 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:29 crc kubenswrapper[5113]: I1212 14:23:29.091016 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:29 crc kubenswrapper[5113]: I1212 14:23:29.143269 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:31 crc kubenswrapper[5113]: I1212 14:23:31.059340 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mfbtg" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="registry-server" containerID="cri-o://c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af" gracePeriod=2 Dec 12 14:23:31 crc kubenswrapper[5113]: I1212 14:23:31.993396 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.048202 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content\") pod \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.048272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities\") pod \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.048394 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flc4b\" (UniqueName: \"kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b\") pod \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\" (UID: \"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a\") " Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.049323 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities" (OuterVolumeSpecName: "utilities") pod "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" (UID: "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.055006 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b" (OuterVolumeSpecName: "kube-api-access-flc4b") pod "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" (UID: "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a"). InnerVolumeSpecName "kube-api-access-flc4b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.067318 5113 generic.go:358] "Generic (PLEG): container finished" podID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerID="c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af" exitCode=0 Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.067402 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerDied","Data":"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af"} Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.067447 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbtg" event={"ID":"e4e38145-12f1-45dc-b0e1-f8a47ca9b66a","Type":"ContainerDied","Data":"e723b97269655958b7bd30cb478e155dade05361291464298cb4e8c7962117cf"} Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.067485 5113 scope.go:117] "RemoveContainer" containerID="c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.067537 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbtg" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.082291 5113 scope.go:117] "RemoveContainer" containerID="d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.099245 5113 scope.go:117] "RemoveContainer" containerID="a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.116778 5113 scope.go:117] "RemoveContainer" containerID="c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af" Dec 12 14:23:32 crc kubenswrapper[5113]: E1212 14:23:32.117265 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af\": container with ID starting with c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af not found: ID does not exist" containerID="c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.117326 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af"} err="failed to get container status \"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af\": rpc error: code = NotFound desc = could not find container \"c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af\": container with ID starting with c1f725719cc0c8c49c736eb9db9cc328c30a33473da792903bd90d074263d8af not found: ID does not exist" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.117361 5113 scope.go:117] "RemoveContainer" containerID="d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d" Dec 12 14:23:32 crc kubenswrapper[5113]: E1212 14:23:32.117658 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d\": container with ID starting with d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d not found: ID does not exist" containerID="d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.117737 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d"} err="failed to get container status \"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d\": rpc error: code = NotFound desc = could not find container \"d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d\": container with ID starting with d28a160735ae2f1f814b2f78cfecd321ad07b75edc0efdceed638bc7588b6f1d not found: ID does not exist" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.117781 5113 scope.go:117] "RemoveContainer" containerID="a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9" Dec 12 14:23:32 crc kubenswrapper[5113]: E1212 14:23:32.118155 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9\": container with ID starting with a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9 not found: ID does not exist" containerID="a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9" 
Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.118293 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9"} err="failed to get container status \"a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9\": rpc error: code = NotFound desc = could not find container \"a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9\": container with ID starting with a4755f838dee5aeb4c7e66b70d8f4c85ce2a6989f1be1aa91945fea1a3824cd9 not found: ID does not exist" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.148765 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" (UID: "e4e38145-12f1-45dc-b0e1-f8a47ca9b66a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.149520 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-flc4b\" (UniqueName: \"kubernetes.io/projected/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-kube-api-access-flc4b\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.149562 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.149574 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.411585 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:32 crc kubenswrapper[5113]: I1212 14:23:32.415707 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mfbtg"] Dec 12 14:23:33 crc kubenswrapper[5113]: I1212 14:23:33.495090 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" path="/var/lib/kubelet/pods/e4e38145-12f1-45dc-b0e1-f8a47ca9b66a/volumes" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.290620 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.291689 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7kg5v" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="registry-server" containerID="cri-o://162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9" gracePeriod=30 Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.703553 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.769587 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities\") pod \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.769646 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6\") pod \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.769703 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content\") pod \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\" (UID: \"b4486b4b-7cef-4488-8b89-1b86e4d0621f\") " Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.771404 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities" (OuterVolumeSpecName: "utilities") pod "b4486b4b-7cef-4488-8b89-1b86e4d0621f" (UID: "b4486b4b-7cef-4488-8b89-1b86e4d0621f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.775646 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6" (OuterVolumeSpecName: "kube-api-access-2pgm6") pod "b4486b4b-7cef-4488-8b89-1b86e4d0621f" (UID: "b4486b4b-7cef-4488-8b89-1b86e4d0621f"). InnerVolumeSpecName "kube-api-access-2pgm6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.784380 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4486b4b-7cef-4488-8b89-1b86e4d0621f" (UID: "b4486b4b-7cef-4488-8b89-1b86e4d0621f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.871401 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.871460 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2pgm6\" (UniqueName: \"kubernetes.io/projected/b4486b4b-7cef-4488-8b89-1b86e4d0621f-kube-api-access-2pgm6\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:01 crc kubenswrapper[5113]: I1212 14:24:01.871479 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4486b4b-7cef-4488-8b89-1b86e4d0621f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.279070 5113 generic.go:358] "Generic (PLEG): container finished" podID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerID="162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9" exitCode=0 Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.279167 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerDied","Data":"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9"} Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.279210 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kg5v" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.279236 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kg5v" event={"ID":"b4486b4b-7cef-4488-8b89-1b86e4d0621f","Type":"ContainerDied","Data":"5749b9ce0cc22cc1110315516c2a8cd09cb3d0c124b77bbd9c420880dc2d7076"} Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.279280 5113 scope.go:117] "RemoveContainer" containerID="162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.297833 5113 scope.go:117] "RemoveContainer" containerID="bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.308007 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.315999 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kg5v"] Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.317321 5113 scope.go:117] "RemoveContainer" containerID="076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.332013 5113 scope.go:117] "RemoveContainer" containerID="162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9" Dec 12 14:24:02 crc kubenswrapper[5113]: E1212 14:24:02.332644 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9\": container with ID starting with 162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9 not found: ID does not exist" containerID="162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.332677 5113 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9"} err="failed to get container status \"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9\": rpc error: code = NotFound desc = could not find container \"162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9\": container with ID starting with 162986bb6030b61ecaf086247f35270c22113b81340270df1953a9db877f09d9 not found: ID does not exist" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.332696 5113 scope.go:117] "RemoveContainer" containerID="bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab" Dec 12 14:24:02 crc kubenswrapper[5113]: E1212 14:24:02.332932 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab\": container with ID starting with bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab not found: ID does not exist" containerID="bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.332958 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab"} err="failed to get container status \"bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab\": rpc error: code = NotFound desc = could not find container \"bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab\": container with ID starting with bf11e616071fc4ea4939fe109e2b9616f584f87cccd987da86f847e02c71d6ab not found: ID does not exist" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.332973 5113 scope.go:117] "RemoveContainer" containerID="076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9" Dec 12 14:24:02 crc kubenswrapper[5113]: E1212 14:24:02.333224 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9\": container with ID starting with 076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9 not found: ID does not exist" containerID="076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9" Dec 12 14:24:02 crc kubenswrapper[5113]: I1212 14:24:02.333249 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9"} err="failed to get container status \"076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9\": rpc error: code = NotFound desc = could not find container \"076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9\": container with ID starting with 076b4fe1549320dd0c91e686ded5555d0985212f2fdbfb06374b2b4245f277a9 not found: ID does not exist" Dec 12 14:24:03 crc kubenswrapper[5113]: I1212 14:24:03.490840 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" path="/var/lib/kubelet/pods/b4486b4b-7cef-4488-8b89-1b86e4d0621f/volumes" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500017 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500602 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500615 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500629 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500635 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="extract-utilities" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500644 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500651 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500660 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500665 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500676 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500682 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="extract-content" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500695 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500700 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500784 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4486b4b-7cef-4488-8b89-1b86e4d0621f" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.500892 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4e38145-12f1-45dc-b0e1-f8a47ca9b66a" containerName="registry-server" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.516664 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.519743 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.616444 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.616764 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.616913 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qx4g\" (UniqueName: \"kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.718186 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.718489 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.718635 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qx4g\" (UniqueName: \"kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.718886 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.718932 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.742316 5113 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh"] Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.745587 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qx4g\" (UniqueName: \"kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g\") pod \"community-operators-cqb6j\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.753699 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.755669 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh"] Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.757015 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.820090 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.820167 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.820259 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdhk\" (UniqueName: \"kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.831866 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.921869 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.922422 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.922553 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.922671 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdhk\" (UniqueName: \"kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.922770 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:05 crc kubenswrapper[5113]: I1212 14:24:05.946931 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdhk\" (UniqueName: \"kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:06 crc kubenswrapper[5113]: I1212 14:24:06.087887 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:06 crc kubenswrapper[5113]: I1212 14:24:06.158347 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:06 crc kubenswrapper[5113]: I1212 14:24:06.306099 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerStarted","Data":"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d"} Dec 12 14:24:06 crc kubenswrapper[5113]: I1212 14:24:06.306423 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerStarted","Data":"b58d8e28da7c6bf02bfc9b8fe7fd5adbee37e8de63b898b530872872b0921a6c"} Dec 12 14:24:06 crc kubenswrapper[5113]: I1212 14:24:06.501994 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh"] Dec 12 14:24:06 crc kubenswrapper[5113]: W1212 14:24:06.510327 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ed7b50_917f_4686_8189_b90a0e4bb5c6.slice/crio-a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a WatchSource:0}: Error finding container a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a: Status 404 returned error can't find the container with id a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a Dec 12 14:24:07 crc kubenswrapper[5113]: I1212 14:24:07.314250 5113 generic.go:358] "Generic (PLEG): container finished" podID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerID="fc2008f5e46dd0a72ca4126491a2106147081162332fbf4d55905a6504f0a38b" exitCode=0 Dec 12 14:24:07 crc kubenswrapper[5113]: I1212 14:24:07.314323 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" event={"ID":"36ed7b50-917f-4686-8189-b90a0e4bb5c6","Type":"ContainerDied","Data":"fc2008f5e46dd0a72ca4126491a2106147081162332fbf4d55905a6504f0a38b"} Dec 12 14:24:07 crc kubenswrapper[5113]: I1212 14:24:07.314392 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" event={"ID":"36ed7b50-917f-4686-8189-b90a0e4bb5c6","Type":"ContainerStarted","Data":"a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a"} Dec 12 14:24:07 crc kubenswrapper[5113]: I1212 14:24:07.316384 5113 generic.go:358] "Generic (PLEG): container finished" podID="b63f33e0-a188-4a65-a400-896ba010a800" containerID="38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d" exitCode=0 Dec 12 14:24:07 crc kubenswrapper[5113]: I1212 14:24:07.316498 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerDied","Data":"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d"} Dec 12 14:24:08 crc kubenswrapper[5113]: I1212 14:24:08.323430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" 
event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerStarted","Data":"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9"} Dec 12 14:24:09 crc kubenswrapper[5113]: I1212 14:24:09.330827 5113 generic.go:358] "Generic (PLEG): container finished" podID="b63f33e0-a188-4a65-a400-896ba010a800" containerID="8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9" exitCode=0 Dec 12 14:24:09 crc kubenswrapper[5113]: I1212 14:24:09.330940 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerDied","Data":"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9"} Dec 12 14:24:09 crc kubenswrapper[5113]: I1212 14:24:09.335598 5113 generic.go:358] "Generic (PLEG): container finished" podID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerID="8c551d7164f0b016e3c4d3e0f420917bc2cba10240fa1f7c167819a9df54d87f" exitCode=0 Dec 12 14:24:09 crc kubenswrapper[5113]: I1212 14:24:09.335713 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" event={"ID":"36ed7b50-917f-4686-8189-b90a0e4bb5c6","Type":"ContainerDied","Data":"8c551d7164f0b016e3c4d3e0f420917bc2cba10240fa1f7c167819a9df54d87f"} Dec 12 14:24:10 crc kubenswrapper[5113]: I1212 14:24:10.344915 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerStarted","Data":"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85"} Dec 12 14:24:10 crc kubenswrapper[5113]: I1212 14:24:10.347757 5113 generic.go:358] "Generic (PLEG): container finished" podID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerID="64fda64b212d03cdb3f79b1234cd4d3776bd998c0b1eb9262e5225a3fe25948a" exitCode=0 Dec 12 14:24:10 crc kubenswrapper[5113]: I1212 14:24:10.347826 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" event={"ID":"36ed7b50-917f-4686-8189-b90a0e4bb5c6","Type":"ContainerDied","Data":"64fda64b212d03cdb3f79b1234cd4d3776bd998c0b1eb9262e5225a3fe25948a"} Dec 12 14:24:10 crc kubenswrapper[5113]: I1212 14:24:10.378963 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqb6j" podStartSLOduration=4.635658005 podStartE2EDuration="5.37893346s" podCreationTimestamp="2025-12-12 14:24:05 +0000 UTC" firstStartedPulling="2025-12-12 14:24:07.317360021 +0000 UTC m=+830.152609848" lastFinishedPulling="2025-12-12 14:24:08.060635466 +0000 UTC m=+830.895885303" observedRunningTime="2025-12-12 14:24:10.365182984 +0000 UTC m=+833.200432901" watchObservedRunningTime="2025-12-12 14:24:10.37893346 +0000 UTC m=+833.214183327" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.605061 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.701330 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util\") pod \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.701445 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfdhk\" (UniqueName: \"kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk\") pod \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.701538 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle\") pod \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\" (UID: \"36ed7b50-917f-4686-8189-b90a0e4bb5c6\") " Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.703985 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle" (OuterVolumeSpecName: "bundle") pod "36ed7b50-917f-4686-8189-b90a0e4bb5c6" (UID: "36ed7b50-917f-4686-8189-b90a0e4bb5c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.708834 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk" (OuterVolumeSpecName: "kube-api-access-hfdhk") pod "36ed7b50-917f-4686-8189-b90a0e4bb5c6" (UID: "36ed7b50-917f-4686-8189-b90a0e4bb5c6"). InnerVolumeSpecName "kube-api-access-hfdhk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.715387 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util" (OuterVolumeSpecName: "util") pod "36ed7b50-917f-4686-8189-b90a0e4bb5c6" (UID: "36ed7b50-917f-4686-8189-b90a0e4bb5c6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.803232 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.803277 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfdhk\" (UniqueName: \"kubernetes.io/projected/36ed7b50-917f-4686-8189-b90a0e4bb5c6-kube-api-access-hfdhk\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:11 crc kubenswrapper[5113]: I1212 14:24:11.803289 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ed7b50-917f-4686-8189-b90a0e4bb5c6-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:12 crc kubenswrapper[5113]: I1212 14:24:12.367634 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" event={"ID":"36ed7b50-917f-4686-8189-b90a0e4bb5c6","Type":"ContainerDied","Data":"a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a"} Dec 12 14:24:12 crc kubenswrapper[5113]: I1212 14:24:12.367690 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7da3cbd438dc45aa605e9609e508c57f7be5af5c64204766ab713148aff959a" Dec 12 14:24:12 crc kubenswrapper[5113]: I1212 14:24:12.367780 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ksbdh" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.350235 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w"] Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351569 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="extract" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351598 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="extract" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351620 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="pull" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351661 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="pull" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351710 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="util" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351723 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="util" Dec 12 14:24:14 crc kubenswrapper[5113]: I1212 14:24:14.351945 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="36ed7b50-917f-4686-8189-b90a0e4bb5c6" containerName="extract" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.608173 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.610855 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.629907 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w"] Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.761240 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.761837 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mmn\" (UniqueName: \"kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.761888 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.833064 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.833618 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.862956 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.863064 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mmn\" (UniqueName: \"kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.863103 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.863777 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.863855 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.902622 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mmn\" (UniqueName: \"kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.934532 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:15 crc kubenswrapper[5113]: I1212 14:24:15.946704 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:16 crc kubenswrapper[5113]: I1212 14:24:16.290366 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w"] Dec 12 14:24:16 crc kubenswrapper[5113]: I1212 14:24:16.413467 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" event={"ID":"9072aa3b-8694-444c-b9f1-087bcfe245e0","Type":"ContainerStarted","Data":"e4b71c2ad90cd47e29e7320b727254a21b4b06cb689c783a32c29422d446ed9f"} Dec 12 14:24:17 crc kubenswrapper[5113]: I1212 14:24:17.151559 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:18 crc kubenswrapper[5113]: I1212 14:24:18.444855 5113 generic.go:358] "Generic (PLEG): container finished" podID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerID="4165a36ebdb6ab01da9a7ad0e3884ba766794ea63010c0363ad51f7a96a8d0ff" exitCode=0 Dec 12 14:24:18 crc kubenswrapper[5113]: I1212 14:24:18.444913 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" event={"ID":"9072aa3b-8694-444c-b9f1-087bcfe245e0","Type":"ContainerDied","Data":"4165a36ebdb6ab01da9a7ad0e3884ba766794ea63010c0363ad51f7a96a8d0ff"} Dec 12 14:24:18 crc kubenswrapper[5113]: I1212 14:24:18.899999 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 
14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.194598 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.194746 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.321527 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.321818 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxk8\" (UniqueName: \"kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.321958 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.422633 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.422673 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.422706 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxk8\" (UniqueName: \"kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.423521 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.423665 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " 
pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.505053 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxk8\" (UniqueName: \"kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8\") pod \"certified-operators-r8t77\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:19 crc kubenswrapper[5113]: I1212 14:24:19.517578 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.120163 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 14:24:20 crc kubenswrapper[5113]: W1212 14:24:20.124713 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf5f15d1_7e48_4cd4_96a3_ac5f8b725d72.slice/crio-93a7af8791b638c34c71dfb294199317b60bbb44fb4b4b1bafe4491935f904c8 WatchSource:0}: Error finding container 93a7af8791b638c34c71dfb294199317b60bbb44fb4b4b1bafe4491935f904c8: Status 404 returned error can't find the container with id 93a7af8791b638c34c71dfb294199317b60bbb44fb4b4b1bafe4491935f904c8 Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.465665 5113 generic.go:358] "Generic (PLEG): container finished" podID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerID="2c23499e8dc617e946bc18c1287fcba04e501a8261a1634cf7e86f4a1b17f6eb" exitCode=0 Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.465736 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" event={"ID":"9072aa3b-8694-444c-b9f1-087bcfe245e0","Type":"ContainerDied","Data":"2c23499e8dc617e946bc18c1287fcba04e501a8261a1634cf7e86f4a1b17f6eb"} Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.472458 5113 generic.go:358] "Generic (PLEG): container finished" podID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerID="d1c14fa1d93e0a2d7840ec6d264142ce396f8fd838d87715a8626f2f38490362" exitCode=0 Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.472737 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerDied","Data":"d1c14fa1d93e0a2d7840ec6d264142ce396f8fd838d87715a8626f2f38490362"} Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.472765 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerStarted","Data":"93a7af8791b638c34c71dfb294199317b60bbb44fb4b4b1bafe4491935f904c8"} Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.772833 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8"] Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.791292 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.791742 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8"] Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.892989 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8cp\" (UniqueName: \"kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.893072 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.893142 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.994895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.994971 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pw8cp\" (UniqueName: \"kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.995017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.995440 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:20 crc kubenswrapper[5113]: I1212 14:24:20.995724 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.023680 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw8cp\" (UniqueName: \"kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.104055 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.361604 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8"] Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.481061 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" event={"ID":"017b8ede-2605-4ab7-81b2-155588352691","Type":"ContainerStarted","Data":"49a80ea62a357cab522705c7c6516df33a3ead102de6219a8b7f635800e25fb2"} Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.483155 5113 generic.go:358] "Generic (PLEG): container finished" podID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerID="72944d86e024d6b24fc375691aae6fb8210f89885efaf3dde186d8fcf6fcfc84" exitCode=0 Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.483188 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" event={"ID":"9072aa3b-8694-444c-b9f1-087bcfe245e0","Type":"ContainerDied","Data":"72944d86e024d6b24fc375691aae6fb8210f89885efaf3dde186d8fcf6fcfc84"} Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.494464 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.494748 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqb6j" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="registry-server" containerID="cri-o://7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85" gracePeriod=2 Dec 12 14:24:21 crc kubenswrapper[5113]: I1212 14:24:21.862738 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.008985 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content\") pod \"b63f33e0-a188-4a65-a400-896ba010a800\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.009054 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qx4g\" (UniqueName: \"kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g\") pod \"b63f33e0-a188-4a65-a400-896ba010a800\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.009247 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities\") pod \"b63f33e0-a188-4a65-a400-896ba010a800\" (UID: \"b63f33e0-a188-4a65-a400-896ba010a800\") " Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.010157 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities" (OuterVolumeSpecName: "utilities") pod "b63f33e0-a188-4a65-a400-896ba010a800" (UID: "b63f33e0-a188-4a65-a400-896ba010a800"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.010325 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.019304 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g" (OuterVolumeSpecName: "kube-api-access-4qx4g") pod "b63f33e0-a188-4a65-a400-896ba010a800" (UID: "b63f33e0-a188-4a65-a400-896ba010a800"). InnerVolumeSpecName "kube-api-access-4qx4g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.069234 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b63f33e0-a188-4a65-a400-896ba010a800" (UID: "b63f33e0-a188-4a65-a400-896ba010a800"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.111649 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qx4g\" (UniqueName: \"kubernetes.io/projected/b63f33e0-a188-4a65-a400-896ba010a800-kube-api-access-4qx4g\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.111693 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b63f33e0-a188-4a65-a400-896ba010a800-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.489815 5113 generic.go:358] "Generic (PLEG): container finished" podID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerID="7ab21a932099e61fcbe9fb7d8a647412985c7290dc5269fdf5efbe1a1e598119" exitCode=0 Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.489882 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerDied","Data":"7ab21a932099e61fcbe9fb7d8a647412985c7290dc5269fdf5efbe1a1e598119"} Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.494437 5113 generic.go:358] "Generic (PLEG): container finished" podID="b63f33e0-a188-4a65-a400-896ba010a800" containerID="7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85" exitCode=0 Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.494560 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqb6j" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.494663 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerDied","Data":"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85"} Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.494710 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqb6j" event={"ID":"b63f33e0-a188-4a65-a400-896ba010a800","Type":"ContainerDied","Data":"b58d8e28da7c6bf02bfc9b8fe7fd5adbee37e8de63b898b530872872b0921a6c"} Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.494734 5113 scope.go:117] "RemoveContainer" containerID="7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.496311 5113 generic.go:358] "Generic (PLEG): container finished" podID="017b8ede-2605-4ab7-81b2-155588352691" containerID="54c2275dca967900f427998df9ef017f1684b03509e96c516ecedfaba3493f4b" exitCode=0 Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.496372 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" event={"ID":"017b8ede-2605-4ab7-81b2-155588352691","Type":"ContainerDied","Data":"54c2275dca967900f427998df9ef017f1684b03509e96c516ecedfaba3493f4b"} Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.525549 5113 scope.go:117] "RemoveContainer" containerID="8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.550447 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.557539 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-cqb6j"] Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.564322 5113 scope.go:117] "RemoveContainer" containerID="38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.615026 5113 scope.go:117] "RemoveContainer" containerID="7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85" Dec 12 14:24:22 crc kubenswrapper[5113]: E1212 14:24:22.615653 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85\": container with ID starting with 7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85 not found: ID does not exist" containerID="7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.615686 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85"} err="failed to get container status \"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85\": rpc error: code = NotFound desc = could not find container \"7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85\": container with ID starting with 7d1c640664aee25e023907f3304fe3af4274e396a6756e3509d98eda3cf19b85 not found: ID does not exist" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.615706 5113 scope.go:117] "RemoveContainer" containerID="8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9" Dec 12 14:24:22 crc kubenswrapper[5113]: E1212 14:24:22.616509 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9\": container with ID starting with 8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9 not found: ID does not exist" containerID="8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.616533 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9"} err="failed to get container status \"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9\": rpc error: code = NotFound desc = could not find container \"8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9\": container with ID starting with 8ee9178ba16c65756e7673942f387b6fe83b2bf519c1b841f9640440712f95b9 not found: ID does not exist" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.616550 5113 scope.go:117] "RemoveContainer" containerID="38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d" Dec 12 14:24:22 crc kubenswrapper[5113]: E1212 14:24:22.616855 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d\": container with ID starting with 38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d not found: ID does not exist" containerID="38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.616888 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d"} err="failed to get container status \"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d\": rpc error: code = NotFound desc = could not find container \"38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d\": container with ID starting with 38080a80352ba3c9710c08aa4cbf20eebae5c7d90984ea25bd807cd067c8cd3d not found: ID does not exist" Dec 12 14:24:22 crc kubenswrapper[5113]: I1212 14:24:22.873729 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.018209 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util\") pod \"9072aa3b-8694-444c-b9f1-087bcfe245e0\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.018316 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle\") pod \"9072aa3b-8694-444c-b9f1-087bcfe245e0\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.018449 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mmn\" (UniqueName: \"kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn\") pod \"9072aa3b-8694-444c-b9f1-087bcfe245e0\" (UID: \"9072aa3b-8694-444c-b9f1-087bcfe245e0\") " Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.020467 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle" (OuterVolumeSpecName: "bundle") pod "9072aa3b-8694-444c-b9f1-087bcfe245e0" (UID: "9072aa3b-8694-444c-b9f1-087bcfe245e0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.023466 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn" (OuterVolumeSpecName: "kube-api-access-v6mmn") pod "9072aa3b-8694-444c-b9f1-087bcfe245e0" (UID: "9072aa3b-8694-444c-b9f1-087bcfe245e0"). InnerVolumeSpecName "kube-api-access-v6mmn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.120015 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6mmn\" (UniqueName: \"kubernetes.io/projected/9072aa3b-8694-444c-b9f1-087bcfe245e0-kube-api-access-v6mmn\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.120049 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.122761 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util" (OuterVolumeSpecName: "util") pod "9072aa3b-8694-444c-b9f1-087bcfe245e0" (UID: "9072aa3b-8694-444c-b9f1-087bcfe245e0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.221530 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9072aa3b-8694-444c-b9f1-087bcfe245e0-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.491775 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63f33e0-a188-4a65-a400-896ba010a800" path="/var/lib/kubelet/pods/b63f33e0-a188-4a65-a400-896ba010a800/volumes" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.501841 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerStarted","Data":"940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7"} Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.506820 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" event={"ID":"9072aa3b-8694-444c-b9f1-087bcfe245e0","Type":"ContainerDied","Data":"e4b71c2ad90cd47e29e7320b727254a21b4b06cb689c783a32c29422d446ed9f"} Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.506868 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b71c2ad90cd47e29e7320b727254a21b4b06cb689c783a32c29422d446ed9f" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.506826 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef2w8w" Dec 12 14:24:23 crc kubenswrapper[5113]: I1212 14:24:23.518568 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r8t77" podStartSLOduration=4.708723703 podStartE2EDuration="5.518551631s" podCreationTimestamp="2025-12-12 14:24:18 +0000 UTC" firstStartedPulling="2025-12-12 14:24:20.473320915 +0000 UTC m=+843.308570742" lastFinishedPulling="2025-12-12 14:24:21.283148843 +0000 UTC m=+844.118398670" observedRunningTime="2025-12-12 14:24:23.517490697 +0000 UTC m=+846.352740544" watchObservedRunningTime="2025-12-12 14:24:23.518551631 +0000 UTC m=+846.353801458" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.230765 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-stwbz"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232309 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="extract" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232406 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="extract" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232521 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="registry-server" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232595 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="registry-server" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232686 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="extract-content" Dec 12 14:24:24 crc 
kubenswrapper[5113]: I1212 14:24:24.232773 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="extract-content" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232848 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="extract-utilities" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232914 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="extract-utilities" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.232995 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="util" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.233068 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="util" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.233187 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="pull" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.233265 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="pull" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.233461 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b63f33e0-a188-4a65-a400-896ba010a800" containerName="registry-server" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.233554 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="9072aa3b-8694-444c-b9f1-087bcfe245e0" containerName="extract" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.460858 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-stwbz"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.461251 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.461019 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.467079 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.467079 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-ncb2t\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.467598 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.468343 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.468540 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.476977 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.477169 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-5ndwb\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.483338 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.483716 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.485667 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.528357 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-6kkdw"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.533800 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.536908 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0420eeb-cf28-424c-858d-f4d8e512c352-observability-operator-tls\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.536984 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56q7\" (UniqueName: \"kubernetes.io/projected/d0420eeb-cf28-424c-858d-f4d8e512c352-kube-api-access-s56q7\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.537037 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.537144 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rdg\" (UniqueName: \"kubernetes.io/projected/8dd0e3af-72f0-4f53-8275-292a415efa3e-kube-api-access-w8rdg\") pod \"obo-prometheus-operator-86648f486b-stwbz\" (UID: \"8dd0e3af-72f0-4f53-8275-292a415efa3e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.537215 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.537328 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.537375 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.553599 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.554051 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-6gzgl\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.561677 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-6kkdw"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.638156 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0420eeb-cf28-424c-858d-f4d8e512c352-observability-operator-tls\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.638925 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s56q7\" (UniqueName: \"kubernetes.io/projected/d0420eeb-cf28-424c-858d-f4d8e512c352-kube-api-access-s56q7\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.638975 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.639012 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rdg\" (UniqueName: \"kubernetes.io/projected/8dd0e3af-72f0-4f53-8275-292a415efa3e-kube-api-access-w8rdg\") pod 
\"obo-prometheus-operator-86648f486b-stwbz\" (UID: \"8dd0e3af-72f0-4f53-8275-292a415efa3e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.639041 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.639081 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.639104 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.643735 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.644375 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.645712 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966932c8-f937-4880-96bd-67648f647780-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5\" (UID: \"966932c8-f937-4880-96bd-67648f647780\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.655624 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1786b562-d557-4c6e-a944-de8d80044d83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82\" (UID: \"1786b562-d557-4c6e-a944-de8d80044d83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.659983 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8rdg\" (UniqueName: 
\"kubernetes.io/projected/8dd0e3af-72f0-4f53-8275-292a415efa3e-kube-api-access-w8rdg\") pod \"obo-prometheus-operator-86648f486b-stwbz\" (UID: \"8dd0e3af-72f0-4f53-8275-292a415efa3e\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.668514 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56q7\" (UniqueName: \"kubernetes.io/projected/d0420eeb-cf28-424c-858d-f4d8e512c352-kube-api-access-s56q7\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.680969 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0420eeb-cf28-424c-858d-f4d8e512c352-observability-operator-tls\") pod \"observability-operator-78c97476f4-6kkdw\" (UID: \"d0420eeb-cf28-424c-858d-f4d8e512c352\") " pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.681631 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dc6sm"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.693980 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.696317 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-wx6xm\"" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.700351 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dc6sm"] Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.740007 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvzn\" (UniqueName: \"kubernetes.io/projected/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-kube-api-access-4wvzn\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.741618 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.806685 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.842701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvzn\" (UniqueName: \"kubernetes.io/projected/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-kube-api-access-4wvzn\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.842750 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.843659 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.891137 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.891270 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.891384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" Dec 12 14:24:24 crc kubenswrapper[5113]: I1212 14:24:24.913767 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvzn\" (UniqueName: \"kubernetes.io/projected/3ec6f3e9-0198-4e39-a510-1bdaa8a9b602-kube-api-access-4wvzn\") pod \"perses-operator-68bdb49cbf-dc6sm\" (UID: \"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602\") " pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.019437 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.229314 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82"] Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.356982 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-6kkdw"] Dec 12 14:24:25 crc kubenswrapper[5113]: W1212 14:24:25.364627 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0420eeb_cf28_424c_858d_f4d8e512c352.slice/crio-7979c252200ee7077f4b0e1608c3e0f5ac8288ee72dfc2b2739af513dca13a8f WatchSource:0}: Error finding container 7979c252200ee7077f4b0e1608c3e0f5ac8288ee72dfc2b2739af513dca13a8f: Status 404 returned error can't find the container with id 7979c252200ee7077f4b0e1608c3e0f5ac8288ee72dfc2b2739af513dca13a8f Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.383638 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dc6sm"] Dec 12 14:24:25 crc kubenswrapper[5113]: W1212 14:24:25.387312 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec6f3e9_0198_4e39_a510_1bdaa8a9b602.slice/crio-e0e8c4ef92492890d259f9b2d26ecaa2196f445319d1dd302c9cf37f366e24be WatchSource:0}: Error finding container e0e8c4ef92492890d259f9b2d26ecaa2196f445319d1dd302c9cf37f366e24be: Status 404 returned error can't find the container with id e0e8c4ef92492890d259f9b2d26ecaa2196f445319d1dd302c9cf37f366e24be Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.466852 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-stwbz"] Dec 12 14:24:25 crc kubenswrapper[5113]: W1212 14:24:25.475534 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dd0e3af_72f0_4f53_8275_292a415efa3e.slice/crio-447069ae6b52825974111cc4e8dfa1440c0253872be04069395bb1be36640fce WatchSource:0}: Error finding container 447069ae6b52825974111cc4e8dfa1440c0253872be04069395bb1be36640fce: Status 404 returned error can't find the container with id 447069ae6b52825974111cc4e8dfa1440c0253872be04069395bb1be36640fce Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.477706 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5"] Dec 12 14:24:25 crc kubenswrapper[5113]: W1212 14:24:25.496007 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966932c8_f937_4880_96bd_67648f647780.slice/crio-f0cff41679fcf8398c9f35b335dfc6d9c1515cc30b7de662c6fbbd43ffb32f76 WatchSource:0}: Error finding container f0cff41679fcf8398c9f35b335dfc6d9c1515cc30b7de662c6fbbd43ffb32f76: Status 404 returned error can't find the container with id f0cff41679fcf8398c9f35b335dfc6d9c1515cc30b7de662c6fbbd43ffb32f76 Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.521589 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" event={"ID":"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602","Type":"ContainerStarted","Data":"e0e8c4ef92492890d259f9b2d26ecaa2196f445319d1dd302c9cf37f366e24be"} Dec 12 14:24:25 crc 
kubenswrapper[5113]: I1212 14:24:25.524344 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" event={"ID":"966932c8-f937-4880-96bd-67648f647780","Type":"ContainerStarted","Data":"f0cff41679fcf8398c9f35b335dfc6d9c1515cc30b7de662c6fbbd43ffb32f76"} Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.525503 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" event={"ID":"d0420eeb-cf28-424c-858d-f4d8e512c352","Type":"ContainerStarted","Data":"7979c252200ee7077f4b0e1608c3e0f5ac8288ee72dfc2b2739af513dca13a8f"} Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.526591 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" event={"ID":"1786b562-d557-4c6e-a944-de8d80044d83","Type":"ContainerStarted","Data":"89a635ac823e3cab32401bbac77916ebede620c9fb9515db10c3f3a401b31944"} Dec 12 14:24:25 crc kubenswrapper[5113]: I1212 14:24:25.527401 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" event={"ID":"8dd0e3af-72f0-4f53-8275-292a415efa3e","Type":"ContainerStarted","Data":"447069ae6b52825974111cc4e8dfa1440c0253872be04069395bb1be36640fce"} Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.520694 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.521080 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.650038 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.711359 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.796647 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-785787bf7-njsr9"] Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.874383 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-785787bf7-njsr9"] Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.874564 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.879729 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.879770 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.880009 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-k224g\"" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.880215 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.973810 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-webhook-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.973875 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt22\" (UniqueName: \"kubernetes.io/projected/0852bd98-d91e-4a32-a760-1750e57937c1-kube-api-access-hjt22\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:29 crc kubenswrapper[5113]: I1212 14:24:29.973927 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-apiservice-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.075584 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-webhook-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.075649 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt22\" (UniqueName: \"kubernetes.io/projected/0852bd98-d91e-4a32-a760-1750e57937c1-kube-api-access-hjt22\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.075712 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-apiservice-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.082342 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-apiservice-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.085377 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0852bd98-d91e-4a32-a760-1750e57937c1-webhook-cert\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.105097 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjt22\" (UniqueName: \"kubernetes.io/projected/0852bd98-d91e-4a32-a760-1750e57937c1-kube-api-access-hjt22\") pod \"elastic-operator-785787bf7-njsr9\" (UID: \"0852bd98-d91e-4a32-a760-1750e57937c1\") " pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:30 crc kubenswrapper[5113]: I1212 14:24:30.211767 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-785787bf7-njsr9" Dec 12 14:24:32 crc kubenswrapper[5113]: I1212 14:24:32.293455 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 14:24:32 crc kubenswrapper[5113]: I1212 14:24:32.293938 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r8t77" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="registry-server" containerID="cri-o://940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" gracePeriod=2 Dec 12 14:24:32 crc kubenswrapper[5113]: I1212 14:24:32.607965 5113 generic.go:358] "Generic (PLEG): container finished" podID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerID="940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" exitCode=0 Dec 12 14:24:32 crc kubenswrapper[5113]: I1212 14:24:32.608973 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerDied","Data":"940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7"} Dec 12 14:24:39 crc kubenswrapper[5113]: E1212 14:24:39.651989 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7 is running failed: container process not found" containerID="940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:39 crc kubenswrapper[5113]: E1212 14:24:39.653009 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7 is running failed: container process not found" containerID="940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:39 crc kubenswrapper[5113]: E1212 14:24:39.653247 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7 is running failed: 
container process not found" containerID="940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 14:24:39 crc kubenswrapper[5113]: E1212 14:24:39.653298 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-r8t77" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="registry-server" probeResult="unknown" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.535544 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.711878 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxxk8\" (UniqueName: \"kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8\") pod \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.711998 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content\") pod \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.712035 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities\") pod \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\" (UID: \"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72\") " Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.713521 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities" (OuterVolumeSpecName: "utilities") pod "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" (UID: "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.717724 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8t77" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.717739 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8t77" event={"ID":"df5f15d1-7e48-4cd4-96a3-ac5f8b725d72","Type":"ContainerDied","Data":"93a7af8791b638c34c71dfb294199317b60bbb44fb4b4b1bafe4491935f904c8"} Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.717794 5113 scope.go:117] "RemoveContainer" containerID="940d551b72dcee0d987d06dd1450375155848498a04a165145ac4c6d57988fc7" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.733321 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8" (OuterVolumeSpecName: "kube-api-access-xxxk8") pod "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" (UID: "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72"). InnerVolumeSpecName "kube-api-access-xxxk8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.775579 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" (UID: "df5f15d1-7e48-4cd4-96a3-ac5f8b725d72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.782355 5113 scope.go:117] "RemoveContainer" containerID="7ab21a932099e61fcbe9fb7d8a647412985c7290dc5269fdf5efbe1a1e598119" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.812917 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.812958 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.812975 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxxk8\" (UniqueName: \"kubernetes.io/projected/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72-kube-api-access-xxxk8\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:44 crc kubenswrapper[5113]: I1212 14:24:44.865618 5113 scope.go:117] "RemoveContainer" containerID="d1c14fa1d93e0a2d7840ec6d264142ce396f8fd838d87715a8626f2f38490362" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.044551 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.051775 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r8t77"] Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.097748 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-785787bf7-njsr9"] Dec 12 14:24:45 crc kubenswrapper[5113]: W1212 14:24:45.102462 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0852bd98_d91e_4a32_a760_1750e57937c1.slice/crio-62b1611f15985613d4abb95d2b0c8484825aac3246e58511156b61bacb5cd28c WatchSource:0}: Error finding container 62b1611f15985613d4abb95d2b0c8484825aac3246e58511156b61bacb5cd28c: Status 404 returned error can't find the container with id 62b1611f15985613d4abb95d2b0c8484825aac3246e58511156b61bacb5cd28c Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.490787 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" path="/var/lib/kubelet/pods/df5f15d1-7e48-4cd4-96a3-ac5f8b725d72/volumes" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.725586 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" event={"ID":"8dd0e3af-72f0-4f53-8275-292a415efa3e","Type":"ContainerStarted","Data":"16e28df278a94a935ce3a8043035e5873453dc7cedbab85dbba6459e9472bf35"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.727233 5113 generic.go:358] "Generic (PLEG): container finished" podID="017b8ede-2605-4ab7-81b2-155588352691" 
containerID="d42a06991f05faa721e6c8879ae8d669d7036c183bbae86803cde490cfa9e514" exitCode=0 Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.727332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" event={"ID":"017b8ede-2605-4ab7-81b2-155588352691","Type":"ContainerDied","Data":"d42a06991f05faa721e6c8879ae8d669d7036c183bbae86803cde490cfa9e514"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.729463 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" event={"ID":"3ec6f3e9-0198-4e39-a510-1bdaa8a9b602","Type":"ContainerStarted","Data":"78a330cbeaf9ea8b227782b491fd51b75b098b6334df66e330c6ed2ae3ca2a4b"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.729570 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.731818 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" event={"ID":"966932c8-f937-4880-96bd-67648f647780","Type":"ContainerStarted","Data":"148e164aae2feced990e25a2c25bc5a89796c7369a0eb7e0e73f8d8d58bb405b"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.733546 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" event={"ID":"d0420eeb-cf28-424c-858d-f4d8e512c352","Type":"ContainerStarted","Data":"cb13aabd122ffab63b268e4ad8b7e2f4e1908ca7eec4b83763a1accea11ddfef"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.734977 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.744554 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.752741 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-stwbz" podStartSLOduration=2.494106408 podStartE2EDuration="21.752718181s" podCreationTimestamp="2025-12-12 14:24:24 +0000 UTC" firstStartedPulling="2025-12-12 14:24:25.478775252 +0000 UTC m=+848.314025079" lastFinishedPulling="2025-12-12 14:24:44.737387025 +0000 UTC m=+867.572636852" observedRunningTime="2025-12-12 14:24:45.744829552 +0000 UTC m=+868.580079409" watchObservedRunningTime="2025-12-12 14:24:45.752718181 +0000 UTC m=+868.587968018" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.756953 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" event={"ID":"1786b562-d557-4c6e-a944-de8d80044d83","Type":"ContainerStarted","Data":"460e38b145799e3e04dead75d5eed2730f6268f13e770db2b218d2d904e39e65"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.761313 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-785787bf7-njsr9" event={"ID":"0852bd98-d91e-4a32-a760-1750e57937c1","Type":"ContainerStarted","Data":"62b1611f15985613d4abb95d2b0c8484825aac3246e58511156b61bacb5cd28c"} Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.773239 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-rtxz5" podStartSLOduration=2.628421987 podStartE2EDuration="21.773223696s" podCreationTimestamp="2025-12-12 14:24:24 +0000 UTC" firstStartedPulling="2025-12-12 14:24:25.498823663 +0000 UTC m=+848.334073490" lastFinishedPulling="2025-12-12 14:24:44.643625372 +0000 UTC m=+867.478875199" observedRunningTime="2025-12-12 14:24:45.769140908 +0000 UTC m=+868.604390755" watchObservedRunningTime="2025-12-12 14:24:45.773223696 +0000 UTC m=+868.608473523" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.809266 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" podStartSLOduration=2.542002756 podStartE2EDuration="21.809248502s" podCreationTimestamp="2025-12-12 14:24:24 +0000 UTC" firstStartedPulling="2025-12-12 14:24:25.392297207 +0000 UTC m=+848.227547034" lastFinishedPulling="2025-12-12 14:24:44.659542953 +0000 UTC m=+867.494792780" observedRunningTime="2025-12-12 14:24:45.804550903 +0000 UTC m=+868.639800760" watchObservedRunningTime="2025-12-12 14:24:45.809248502 +0000 UTC m=+868.644498329" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.827539 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-6kkdw" podStartSLOduration=2.58945566 podStartE2EDuration="21.827526397s" podCreationTimestamp="2025-12-12 14:24:24 +0000 UTC" firstStartedPulling="2025-12-12 14:24:25.372221075 +0000 UTC m=+848.207470902" lastFinishedPulling="2025-12-12 14:24:44.610291812 +0000 UTC m=+867.445541639" observedRunningTime="2025-12-12 14:24:45.825445682 +0000 UTC m=+868.660695519" watchObservedRunningTime="2025-12-12 14:24:45.827526397 +0000 UTC m=+868.662776224" Dec 12 14:24:45 crc kubenswrapper[5113]: I1212 14:24:45.876906 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bb946d9b9-k6f82" podStartSLOduration=2.481088705 podStartE2EDuration="21.876885721s" podCreationTimestamp="2025-12-12 14:24:24 +0000 UTC" firstStartedPulling="2025-12-12 14:24:25.248360433 +0000 UTC m=+848.083610250" lastFinishedPulling="2025-12-12 14:24:44.644157439 +0000 UTC m=+867.479407266" observedRunningTime="2025-12-12 14:24:45.871352597 +0000 UTC m=+868.706602434" watchObservedRunningTime="2025-12-12 14:24:45.876885721 +0000 UTC m=+868.712135548" Dec 12 14:24:46 crc kubenswrapper[5113]: I1212 14:24:46.768614 5113 generic.go:358] "Generic (PLEG): container finished" podID="017b8ede-2605-4ab7-81b2-155588352691" containerID="4dc0d6a73854403a91c98759f283d5e10a4746af4289233051103402201fd5bd" exitCode=0 Dec 12 14:24:46 crc kubenswrapper[5113]: I1212 14:24:46.768664 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" event={"ID":"017b8ede-2605-4ab7-81b2-155588352691","Type":"ContainerDied","Data":"4dc0d6a73854403a91c98759f283d5e10a4746af4289233051103402201fd5bd"} Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.509904 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.557107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw8cp\" (UniqueName: \"kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp\") pod \"017b8ede-2605-4ab7-81b2-155588352691\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.557205 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util\") pod \"017b8ede-2605-4ab7-81b2-155588352691\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.557286 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle\") pod \"017b8ede-2605-4ab7-81b2-155588352691\" (UID: \"017b8ede-2605-4ab7-81b2-155588352691\") " Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.559187 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle" (OuterVolumeSpecName: "bundle") pod "017b8ede-2605-4ab7-81b2-155588352691" (UID: "017b8ede-2605-4ab7-81b2-155588352691"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.565650 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp" (OuterVolumeSpecName: "kube-api-access-pw8cp") pod "017b8ede-2605-4ab7-81b2-155588352691" (UID: "017b8ede-2605-4ab7-81b2-155588352691"). InnerVolumeSpecName "kube-api-access-pw8cp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.565891 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util" (OuterVolumeSpecName: "util") pod "017b8ede-2605-4ab7-81b2-155588352691" (UID: "017b8ede-2605-4ab7-81b2-155588352691"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.658951 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.658996 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pw8cp\" (UniqueName: \"kubernetes.io/projected/017b8ede-2605-4ab7-81b2-155588352691-kube-api-access-pw8cp\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.659010 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/017b8ede-2605-4ab7-81b2-155588352691-util\") on node \"crc\" DevicePath \"\"" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.785206 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.785194 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a4c6t8" event={"ID":"017b8ede-2605-4ab7-81b2-155588352691","Type":"ContainerDied","Data":"49a80ea62a357cab522705c7c6516df33a3ead102de6219a8b7f635800e25fb2"} Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.785559 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a80ea62a357cab522705c7c6516df33a3ead102de6219a8b7f635800e25fb2" Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.786977 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-785787bf7-njsr9" event={"ID":"0852bd98-d91e-4a32-a760-1750e57937c1","Type":"ContainerStarted","Data":"e41afd94965be37d8fcc5b8a0b59ae98f827ffafca1ea7623c9fcc31704b56ce"} Dec 12 14:24:48 crc kubenswrapper[5113]: I1212 14:24:48.809656 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-785787bf7-njsr9" podStartSLOduration=16.412295518 podStartE2EDuration="19.80963778s" podCreationTimestamp="2025-12-12 14:24:29 +0000 UTC" firstStartedPulling="2025-12-12 14:24:45.10542861 +0000 UTC m=+867.940678437" lastFinishedPulling="2025-12-12 14:24:48.502770872 +0000 UTC m=+871.338020699" observedRunningTime="2025-12-12 14:24:48.805718866 +0000 UTC m=+871.640968703" watchObservedRunningTime="2025-12-12 14:24:48.80963778 +0000 UTC m=+871.644887607" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.741911 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742801 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="extract-content" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742820 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="extract-content" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742837 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="util" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742843 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="util" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742865 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="extract-utilities" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742870 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="extract-utilities" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742880 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="extract" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742887 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="extract" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742901 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="registry-server" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742906 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="registry-server" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742913 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="pull" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.742918 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="pull" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.743011 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="df5f15d1-7e48-4cd4-96a3-ac5f8b725d72" containerName="registry-server" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.743023 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="017b8ede-2605-4ab7-81b2-155588352691" containerName="extract" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.905923 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.906190 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.909231 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.909298 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.909423 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.910562 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-xs99f\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.911180 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.911229 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.911281 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.911430 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.911869 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979056 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979092 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979169 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979202 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979229 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979327 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979399 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979428 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979468 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" 
(UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979565 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c7dc263c-4149-4dec-8ee6-614b64d28491-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979671 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979687 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979717 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:49 crc kubenswrapper[5113]: I1212 14:24:49.979740 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.080944 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081020 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081066 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081108 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081179 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081226 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081275 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081604 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081681 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081705 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081803 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c7dc263c-4149-4dec-8ee6-614b64d28491-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.081964 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082009 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082040 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082052 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082091 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082340 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.082674 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.083132 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.084078 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.085697 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.085704 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.085847 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.086382 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" 
(UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.088355 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.089212 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c7dc263c-4149-4dec-8ee6-614b64d28491-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.094842 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c7dc263c-4149-4dec-8ee6-614b64d28491-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c7dc263c-4149-4dec-8ee6-614b64d28491\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.220918 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.494540 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.797047 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c7dc263c-4149-4dec-8ee6-614b64d28491","Type":"ContainerStarted","Data":"efe85c2ca11323bbcdcee2046954c4c278f0dcadf63b9b18045cda56cdbefadb"} Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.902221 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:24:50 crc kubenswrapper[5113]: I1212 14:24:50.902322 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.076098 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb"] Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.220574 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb"] Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.220731 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.223333 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.235705 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-jpkx9\"" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.235773 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.295162 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e7a0db0-811b-4948-8645-cc6bea64f419-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.295273 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5x9x\" (UniqueName: \"kubernetes.io/projected/0e7a0db0-811b-4948-8645-cc6bea64f419-kube-api-access-m5x9x\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.396237 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5x9x\" (UniqueName: \"kubernetes.io/projected/0e7a0db0-811b-4948-8645-cc6bea64f419-kube-api-access-m5x9x\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.396779 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e7a0db0-811b-4948-8645-cc6bea64f419-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.397251 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e7a0db0-811b-4948-8645-cc6bea64f419-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.418031 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5x9x\" (UniqueName: \"kubernetes.io/projected/0e7a0db0-811b-4948-8645-cc6bea64f419-kube-api-access-m5x9x\") pod \"cert-manager-operator-controller-manager-64c74584c4-rr2bb\" (UID: \"0e7a0db0-811b-4948-8645-cc6bea64f419\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.543195 5113 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.761588 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb"] Dec 12 14:24:56 crc kubenswrapper[5113]: W1212 14:24:56.777078 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e7a0db0_811b_4948_8645_cc6bea64f419.slice/crio-44e2be6f6fb8c3e30b342f25747c662f053a91b6d01bf1f52ae8574ac82041a5 WatchSource:0}: Error finding container 44e2be6f6fb8c3e30b342f25747c662f053a91b6d01bf1f52ae8574ac82041a5: Status 404 returned error can't find the container with id 44e2be6f6fb8c3e30b342f25747c662f053a91b6d01bf1f52ae8574ac82041a5 Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.781460 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-dc6sm" Dec 12 14:24:56 crc kubenswrapper[5113]: I1212 14:24:56.844186 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" event={"ID":"0e7a0db0-811b-4948-8645-cc6bea64f419","Type":"ContainerStarted","Data":"44e2be6f6fb8c3e30b342f25747c662f053a91b6d01bf1f52ae8574ac82041a5"} Dec 12 14:25:05 crc kubenswrapper[5113]: I1212 14:25:05.907779 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c7dc263c-4149-4dec-8ee6-614b64d28491","Type":"ContainerStarted","Data":"98ab14585a185afece62f537694683a037f86c311b8fc8a742ce26c8ec9b0790"} Dec 12 14:25:05 crc kubenswrapper[5113]: I1212 14:25:05.909503 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" event={"ID":"0e7a0db0-811b-4948-8645-cc6bea64f419","Type":"ContainerStarted","Data":"1ac0a97243a5e2a3f0233fec3a608ef2a910a8435977045828bd323279a0e6aa"} Dec 12 14:25:05 crc kubenswrapper[5113]: I1212 14:25:05.961200 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rr2bb" podStartSLOduration=1.7389485310000001 podStartE2EDuration="9.961177486s" podCreationTimestamp="2025-12-12 14:24:56 +0000 UTC" firstStartedPulling="2025-12-12 14:24:56.778747362 +0000 UTC m=+879.613997189" lastFinishedPulling="2025-12-12 14:25:05.000976307 +0000 UTC m=+887.836226144" observedRunningTime="2025-12-12 14:25:05.958162891 +0000 UTC m=+888.793412728" watchObservedRunningTime="2025-12-12 14:25:05.961177486 +0000 UTC m=+888.796427313" Dec 12 14:25:06 crc kubenswrapper[5113]: I1212 14:25:06.037063 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:25:06 crc kubenswrapper[5113]: I1212 14:25:06.073969 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 12 14:25:07 crc kubenswrapper[5113]: I1212 14:25:07.921049 5113 generic.go:358] "Generic (PLEG): container finished" podID="c7dc263c-4149-4dec-8ee6-614b64d28491" containerID="98ab14585a185afece62f537694683a037f86c311b8fc8a742ce26c8ec9b0790" exitCode=0 Dec 12 14:25:07 crc kubenswrapper[5113]: I1212 14:25:07.921195 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c7dc263c-4149-4dec-8ee6-614b64d28491","Type":"ContainerDied","Data":"98ab14585a185afece62f537694683a037f86c311b8fc8a742ce26c8ec9b0790"} Dec 12 14:25:08 crc kubenswrapper[5113]: I1212 14:25:08.929465 5113 generic.go:358] "Generic (PLEG): container finished" podID="c7dc263c-4149-4dec-8ee6-614b64d28491" containerID="83743495bd78b54c5a165713ccca92aee06e70228327fdb721c5d29039f06402" exitCode=0 Dec 12 14:25:08 crc kubenswrapper[5113]: I1212 14:25:08.929558 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c7dc263c-4149-4dec-8ee6-614b64d28491","Type":"ContainerDied","Data":"83743495bd78b54c5a165713ccca92aee06e70228327fdb721c5d29039f06402"} Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.820771 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq"] Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.825388 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.827206 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-np8pr\"" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.829045 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.829327 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.833920 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq"] Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.936647 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c7dc263c-4149-4dec-8ee6-614b64d28491","Type":"ContainerStarted","Data":"1dd80ed490cb7c356336d4af29fd716b9ef5ad108a6a740e6362d37a28dc9115"} Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.936786 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.969656 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvb2k\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-kube-api-access-tvb2k\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.969738 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:09 crc kubenswrapper[5113]: I1212 14:25:09.977466 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.254943759 podStartE2EDuration="20.977445966s" 
podCreationTimestamp="2025-12-12 14:24:49 +0000 UTC" firstStartedPulling="2025-12-12 14:24:50.545510343 +0000 UTC m=+873.380760170" lastFinishedPulling="2025-12-12 14:25:05.26801255 +0000 UTC m=+888.103262377" observedRunningTime="2025-12-12 14:25:09.973947846 +0000 UTC m=+892.809197693" watchObservedRunningTime="2025-12-12 14:25:09.977445966 +0000 UTC m=+892.812695793" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.071225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.071387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tvb2k\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-kube-api-access-tvb2k\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.090684 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.090933 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvb2k\" (UniqueName: \"kubernetes.io/projected/65ec725c-e9b8-4f4c-b24b-2969063611c6-kube-api-access-tvb2k\") pod \"cert-manager-webhook-7894b5b9b4-zjxtq\" (UID: \"65ec725c-e9b8-4f4c-b24b-2969063611c6\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.144297 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.592791 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq"] Dec 12 14:25:10 crc kubenswrapper[5113]: W1212 14:25:10.605844 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65ec725c_e9b8_4f4c_b24b_2969063611c6.slice/crio-0678a9280e39c6bd5eaf186ff3fccd974aed11c0c954598f237a251afadc27a9 WatchSource:0}: Error finding container 0678a9280e39c6bd5eaf186ff3fccd974aed11c0c954598f237a251afadc27a9: Status 404 returned error can't find the container with id 0678a9280e39c6bd5eaf186ff3fccd974aed11c0c954598f237a251afadc27a9 Dec 12 14:25:10 crc kubenswrapper[5113]: I1212 14:25:10.943047 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" event={"ID":"65ec725c-e9b8-4f4c-b24b-2969063611c6","Type":"ContainerStarted","Data":"0678a9280e39c6bd5eaf186ff3fccd974aed11c0c954598f237a251afadc27a9"} Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.588998 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt"] Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.593851 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.597235 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-ktjsp\"" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.599288 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.599347 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7pt\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-kube-api-access-8d7pt\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.605928 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt"] Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.699833 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.699906 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7pt\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-kube-api-access-8d7pt\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.719203 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7pt\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-kube-api-access-8d7pt\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.723900 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f7e9079-00c5-41cd-903e-6dbf34f2b799-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-c6bdt\" (UID: \"8f7e9079-00c5-41cd-903e-6dbf34f2b799\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:12 crc kubenswrapper[5113]: I1212 14:25:12.918080 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" Dec 12 14:25:13 crc kubenswrapper[5113]: I1212 14:25:13.147861 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt"] Dec 12 14:25:13 crc kubenswrapper[5113]: W1212 14:25:13.159047 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f7e9079_00c5_41cd_903e_6dbf34f2b799.slice/crio-0f28b2d4f45f278bf97b490d0ab9ef80fa58d0b68d0f832191036f3c3caae4e5 WatchSource:0}: Error finding container 0f28b2d4f45f278bf97b490d0ab9ef80fa58d0b68d0f832191036f3c3caae4e5: Status 404 returned error can't find the container with id 0f28b2d4f45f278bf97b490d0ab9ef80fa58d0b68d0f832191036f3c3caae4e5 Dec 12 14:25:13 crc kubenswrapper[5113]: I1212 14:25:13.962387 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" event={"ID":"8f7e9079-00c5-41cd-903e-6dbf34f2b799","Type":"ContainerStarted","Data":"0f28b2d4f45f278bf97b490d0ab9ef80fa58d0b68d0f832191036f3c3caae4e5"} Dec 12 14:25:19 crc kubenswrapper[5113]: I1212 14:25:19.894160 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:25:19 crc kubenswrapper[5113]: I1212 14:25:19.894788 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:25:19 crc kubenswrapper[5113]: I1212 14:25:19.908385 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:25:19 crc kubenswrapper[5113]: I1212 14:25:19.908461 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.388222 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-pnsj5"] Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.397318 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-pnsj5"] Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.397457 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.399712 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-v4kqf\"" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.433210 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-bound-sa-token\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.433354 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72mf\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-kube-api-access-j72mf\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.534769 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j72mf\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-kube-api-access-j72mf\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.534909 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-bound-sa-token\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.560291 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72mf\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-kube-api-access-j72mf\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.561257 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da4a8a40-f574-4cd1-867a-bc3b06e8206a-bound-sa-token\") pod \"cert-manager-858d87f86b-pnsj5\" (UID: \"da4a8a40-f574-4cd1-867a-bc3b06e8206a\") " pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.857327 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-pnsj5" Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.902537 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:25:20 crc kubenswrapper[5113]: I1212 14:25:20.902613 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.023226 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" event={"ID":"8f7e9079-00c5-41cd-903e-6dbf34f2b799","Type":"ContainerStarted","Data":"1b05e121ff87c3cf2b22964fae9faa1a031b40b79b5e3b94059a2a83af295ef4"} Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.038339 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" event={"ID":"65ec725c-e9b8-4f4c-b24b-2969063611c6","Type":"ContainerStarted","Data":"b115ae6dd95415f82f4833bb7259ba8475c90468d9718404e1ae52b874a18942"} Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.038911 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.098072 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c7dc263c-4149-4dec-8ee6-614b64d28491" containerName="elasticsearch" probeResult="failure" output=< Dec 12 14:25:21 crc kubenswrapper[5113]: {"timestamp": "2025-12-12T14:25:21+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 12 14:25:21 crc kubenswrapper[5113]: > Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.114294 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" podStartSLOduration=2.336990284 podStartE2EDuration="12.114272759s" podCreationTimestamp="2025-12-12 14:25:09 +0000 UTC" firstStartedPulling="2025-12-12 14:25:10.608549447 +0000 UTC m=+893.443799274" lastFinishedPulling="2025-12-12 14:25:20.385831922 +0000 UTC m=+903.221081749" observedRunningTime="2025-12-12 14:25:21.100700862 +0000 UTC m=+903.935950709" watchObservedRunningTime="2025-12-12 14:25:21.114272759 +0000 UTC m=+903.949522586" Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.115063 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-c6bdt" podStartSLOduration=1.953149177 podStartE2EDuration="9.115057313s" podCreationTimestamp="2025-12-12 14:25:12 +0000 UTC" firstStartedPulling="2025-12-12 14:25:13.164345019 +0000 UTC m=+895.999594846" lastFinishedPulling="2025-12-12 14:25:20.326253155 +0000 UTC m=+903.161502982" observedRunningTime="2025-12-12 14:25:21.056726796 +0000 UTC m=+903.891976633" watchObservedRunningTime="2025-12-12 14:25:21.115057313 +0000 UTC m=+903.950307140" Dec 12 14:25:21 crc kubenswrapper[5113]: I1212 14:25:21.407512 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["cert-manager/cert-manager-858d87f86b-pnsj5"] Dec 12 14:25:21 crc kubenswrapper[5113]: W1212 14:25:21.412891 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda4a8a40_f574_4cd1_867a_bc3b06e8206a.slice/crio-37f4f9086fa078e604d39a1d40bc897c5bd77763793811629e47cac09d15868e WatchSource:0}: Error finding container 37f4f9086fa078e604d39a1d40bc897c5bd77763793811629e47cac09d15868e: Status 404 returned error can't find the container with id 37f4f9086fa078e604d39a1d40bc897c5bd77763793811629e47cac09d15868e Dec 12 14:25:22 crc kubenswrapper[5113]: I1212 14:25:22.043845 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-pnsj5" event={"ID":"da4a8a40-f574-4cd1-867a-bc3b06e8206a","Type":"ContainerStarted","Data":"37f4f9086fa078e604d39a1d40bc897c5bd77763793811629e47cac09d15868e"} Dec 12 14:25:23 crc kubenswrapper[5113]: I1212 14:25:23.051598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-pnsj5" event={"ID":"da4a8a40-f574-4cd1-867a-bc3b06e8206a","Type":"ContainerStarted","Data":"67b64693a3b43737582020dba7562cfc44950369dcb6029eb44d9fd6fe0a56fc"} Dec 12 14:25:23 crc kubenswrapper[5113]: I1212 14:25:23.068121 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-pnsj5" podStartSLOduration=3.068101829 podStartE2EDuration="3.068101829s" podCreationTimestamp="2025-12-12 14:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 14:25:23.065103154 +0000 UTC m=+905.900353001" watchObservedRunningTime="2025-12-12 14:25:23.068101829 +0000 UTC m=+905.903351656" Dec 12 14:25:26 crc kubenswrapper[5113]: I1212 14:25:26.502644 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 12 14:25:28 crc kubenswrapper[5113]: I1212 14:25:28.055515 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zjxtq" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.424857 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.453968 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.454363 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.458956 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.459682 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.460482 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.461487 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.461733 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-6fddz\"" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.596828 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnhj5\" (UniqueName: \"kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.596880 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.596911 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597314 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597444 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597638 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597878 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597934 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.597967 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.598006 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.598036 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.598103 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.598141 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" 
Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699462 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699534 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699563 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699590 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699644 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699667 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699741 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnhj5\" (UniqueName: \"kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 
14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699808 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699842 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699862 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.699892 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.701177 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.701336 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.701841 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.701989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.702146 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.702407 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.702537 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.702863 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.702947 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.707085 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.708370 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.715615 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-push\" (UniqueName: 
\"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.720670 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnhj5\" (UniqueName: \"kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:40 crc kubenswrapper[5113]: I1212 14:25:40.780470 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:41 crc kubenswrapper[5113]: I1212 14:25:41.294704 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:42 crc kubenswrapper[5113]: I1212 14:25:42.179557 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cbfff0bc-6487-44d5-98c5-c323e1ba3331","Type":"ContainerStarted","Data":"3f670b0bcaaf37a1cb160bfa206d75f8ceea08ac1c017ae09493dcc2401ae21e"} Dec 12 14:25:50 crc kubenswrapper[5113]: I1212 14:25:50.233512 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cbfff0bc-6487-44d5-98c5-c323e1ba3331","Type":"ContainerStarted","Data":"4ac5b3abdf08eeed2cf565ca37f9e68f277dba101cf4f0cd0a16bd4163d70005"} Dec 12 14:25:50 crc kubenswrapper[5113]: I1212 14:25:50.297072 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59988: no serving certificate available for the kubelet" Dec 12 14:25:50 crc kubenswrapper[5113]: I1212 14:25:50.901815 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:25:50 crc kubenswrapper[5113]: I1212 14:25:50.901927 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:25:50 crc kubenswrapper[5113]: I1212 14:25:50.901994 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:25:51 crc kubenswrapper[5113]: I1212 14:25:51.244069 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:25:51 crc kubenswrapper[5113]: I1212 14:25:51.244270 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" 
podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5" gracePeriod=600 Dec 12 14:25:51 crc kubenswrapper[5113]: I1212 14:25:51.332817 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:52 crc kubenswrapper[5113]: I1212 14:25:52.255856 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5" exitCode=0 Dec 12 14:25:52 crc kubenswrapper[5113]: I1212 14:25:52.255932 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5"} Dec 12 14:25:52 crc kubenswrapper[5113]: I1212 14:25:52.256651 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af"} Dec 12 14:25:52 crc kubenswrapper[5113]: I1212 14:25:52.256701 5113 scope.go:117] "RemoveContainer" containerID="3e968de37b04629c3ca728af2c0b097332111db0ae21e944a78534845c463d37" Dec 12 14:25:52 crc kubenswrapper[5113]: I1212 14:25:52.256965 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" containerName="git-clone" containerID="cri-o://4ac5b3abdf08eeed2cf565ca37f9e68f277dba101cf4f0cd0a16bd4163d70005" gracePeriod=30 Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.265571 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cbfff0bc-6487-44d5-98c5-c323e1ba3331/git-clone/0.log" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.265871 5113 generic.go:358] "Generic (PLEG): container finished" podID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" containerID="4ac5b3abdf08eeed2cf565ca37f9e68f277dba101cf4f0cd0a16bd4163d70005" exitCode=1 Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.265989 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cbfff0bc-6487-44d5-98c5-c323e1ba3331","Type":"ContainerDied","Data":"4ac5b3abdf08eeed2cf565ca37f9e68f277dba101cf4f0cd0a16bd4163d70005"} Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.311057 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cbfff0bc-6487-44d5-98c5-c323e1ba3331/git-clone/0.log" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.311154 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.426672 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.426780 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.426839 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.426886 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.426978 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427009 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427049 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427083 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427216 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427263 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-pull\" 
(UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnhj5\" (UniqueName: \"kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427360 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427397 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\" (UID: \"cbfff0bc-6487-44d5-98c5-c323e1ba3331\") " Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.427983 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.428040 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.428062 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.428296 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.429015 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.429048 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.429672 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.430318 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.430550 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.434747 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push" (OuterVolumeSpecName: "builder-dockercfg-6fddz-push") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "builder-dockercfg-6fddz-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.435075 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull" (OuterVolumeSpecName: "builder-dockercfg-6fddz-pull") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "builder-dockercfg-6fddz-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.435615 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.436312 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5" (OuterVolumeSpecName: "kube-api-access-tnhj5") pod "cbfff0bc-6487-44d5-98c5-c323e1ba3331" (UID: "cbfff0bc-6487-44d5-98c5-c323e1ba3331"). InnerVolumeSpecName "kube-api-access-tnhj5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529814 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529840 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529849 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529858 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529868 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529891 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529899 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529907 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-push\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529915 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbfff0bc-6487-44d5-98c5-c323e1ba3331-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529923 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-builder-dockercfg-6fddz-pull\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529931 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnhj5\" (UniqueName: 
\"kubernetes.io/projected/cbfff0bc-6487-44d5-98c5-c323e1ba3331-kube-api-access-tnhj5\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529938 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cbfff0bc-6487-44d5-98c5-c323e1ba3331-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:53 crc kubenswrapper[5113]: I1212 14:25:53.529946 5113 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cbfff0bc-6487-44d5-98c5-c323e1ba3331-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.278343 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cbfff0bc-6487-44d5-98c5-c323e1ba3331/git-clone/0.log" Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.278997 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cbfff0bc-6487-44d5-98c5-c323e1ba3331","Type":"ContainerDied","Data":"3f670b0bcaaf37a1cb160bfa206d75f8ceea08ac1c017ae09493dcc2401ae21e"} Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.279137 5113 scope.go:117] "RemoveContainer" containerID="4ac5b3abdf08eeed2cf565ca37f9e68f277dba101cf4f0cd0a16bd4163d70005" Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.279111 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.322265 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:54 crc kubenswrapper[5113]: I1212 14:25:54.332539 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 14:25:55 crc kubenswrapper[5113]: I1212 14:25:55.495337 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" path="/var/lib/kubelet/pods/cbfff0bc-6487-44d5-98c5-c323e1ba3331/volumes" Dec 12 14:26:03 crc kubenswrapper[5113]: I1212 14:26:03.054685 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:03 crc kubenswrapper[5113]: I1212 14:26:03.056537 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" containerName="git-clone" Dec 12 14:26:03 crc kubenswrapper[5113]: I1212 14:26:03.056555 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" containerName="git-clone" Dec 12 14:26:03 crc kubenswrapper[5113]: I1212 14:26:03.057317 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbfff0bc-6487-44d5-98c5-c323e1ba3331" containerName="git-clone" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.350021 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.351374 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.355041 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.355106 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-6fddz\"" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.357811 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\"" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.358456 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\"" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.361409 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\"" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.391864 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.391911 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.391936 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.391974 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392026 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392065 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392235 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392307 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392404 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392478 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392500 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwppc\" (UniqueName: \"kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392539 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.392598 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " 
pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.493728 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.493817 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494066 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494220 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494286 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494345 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494596 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494671 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 
12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494694 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.494938 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495167 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495243 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495303 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495470 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495588 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.495887 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.496535 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.496579 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwppc\" (UniqueName: \"kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.496627 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.496663 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.497064 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.497342 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.503254 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.503769 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.510533 5113 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.520049 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwppc\" (UniqueName: \"kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc\") pod \"service-telemetry-framework-index-2-build\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.673959 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:04 crc kubenswrapper[5113]: I1212 14:26:04.930128 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:04 crc kubenswrapper[5113]: W1212 14:26:04.935452 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65bd65ae_b1a7_4417_a743_500b8117a4eb.slice/crio-9062ac69f8dabd6341c6b91ff1abb96ffe81c5037b971865e416e18f618deaca WatchSource:0}: Error finding container 9062ac69f8dabd6341c6b91ff1abb96ffe81c5037b971865e416e18f618deaca: Status 404 returned error can't find the container with id 9062ac69f8dabd6341c6b91ff1abb96ffe81c5037b971865e416e18f618deaca Dec 12 14:26:05 crc kubenswrapper[5113]: I1212 14:26:05.018240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"65bd65ae-b1a7-4417-a743-500b8117a4eb","Type":"ContainerStarted","Data":"9062ac69f8dabd6341c6b91ff1abb96ffe81c5037b971865e416e18f618deaca"} Dec 12 14:26:06 crc kubenswrapper[5113]: I1212 14:26:06.030067 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"65bd65ae-b1a7-4417-a743-500b8117a4eb","Type":"ContainerStarted","Data":"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961"} Dec 12 14:26:06 crc kubenswrapper[5113]: I1212 14:26:06.097473 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55094: no serving certificate available for the kubelet" Dec 12 14:26:07 crc kubenswrapper[5113]: I1212 14:26:07.134357 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.051359 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="65bd65ae-b1a7-4417-a743-500b8117a4eb" containerName="git-clone" containerID="cri-o://589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961" gracePeriod=30 Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.499913 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_65bd65ae-b1a7-4417-a743-500b8117a4eb/git-clone/0.log" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.500250 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556180 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556261 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556344 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556476 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556558 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556632 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwppc\" (UniqueName: \"kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556672 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556739 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556809 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556868 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.556951 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.557013 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.557032 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets\") pod \"65bd65ae-b1a7-4417-a743-500b8117a4eb\" (UID: \"65bd65ae-b1a7-4417-a743-500b8117a4eb\") " Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.557683 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.558090 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.558200 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.558284 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.558425 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.559390 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.560316 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.560333 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.560731 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.565074 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.565159 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc" (OuterVolumeSpecName: "kube-api-access-vwppc") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "kube-api-access-vwppc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.566329 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull" (OuterVolumeSpecName: "builder-dockercfg-6fddz-pull") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "builder-dockercfg-6fddz-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.566529 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push" (OuterVolumeSpecName: "builder-dockercfg-6fddz-push") pod "65bd65ae-b1a7-4417-a743-500b8117a4eb" (UID: "65bd65ae-b1a7-4417-a743-500b8117a4eb"). InnerVolumeSpecName "builder-dockercfg-6fddz-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658719 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-pull\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658771 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658780 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658788 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwppc\" (UniqueName: \"kubernetes.io/projected/65bd65ae-b1a7-4417-a743-500b8117a4eb-kube-api-access-vwppc\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658797 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658805 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658813 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658822 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65bd65ae-b1a7-4417-a743-500b8117a4eb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658831 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-builder-dockercfg-6fddz-push\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658842 5113 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/65bd65ae-b1a7-4417-a743-500b8117a4eb-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658852 5113 reconciler_common.go:299] "Volume detached for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65bd65ae-b1a7-4417-a743-500b8117a4eb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658861 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:08 crc kubenswrapper[5113]: I1212 14:26:08.658870 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65bd65ae-b1a7-4417-a743-500b8117a4eb-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060597 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_65bd65ae-b1a7-4417-a743-500b8117a4eb/git-clone/0.log" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060652 5113 generic.go:358] "Generic (PLEG): container finished" podID="65bd65ae-b1a7-4417-a743-500b8117a4eb" containerID="589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961" exitCode=1 Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060729 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"65bd65ae-b1a7-4417-a743-500b8117a4eb","Type":"ContainerDied","Data":"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961"} Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060745 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060760 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"65bd65ae-b1a7-4417-a743-500b8117a4eb","Type":"ContainerDied","Data":"9062ac69f8dabd6341c6b91ff1abb96ffe81c5037b971865e416e18f618deaca"} Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.060778 5113 scope.go:117] "RemoveContainer" containerID="589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.097896 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.101474 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.107301 5113 scope.go:117] "RemoveContainer" containerID="589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961" Dec 12 14:26:09 crc kubenswrapper[5113]: E1212 14:26:09.107896 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961\": container with ID starting with 589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961 not found: ID does not exist" containerID="589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.107931 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961"} err="failed to get container status 
\"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961\": rpc error: code = NotFound desc = could not find container \"589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961\": container with ID starting with 589dfcbb542078568e0e16501da62699bcdc6c63da12d558c4c9579a35927961 not found: ID does not exist" Dec 12 14:26:09 crc kubenswrapper[5113]: I1212 14:26:09.493103 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65bd65ae-b1a7-4417-a743-500b8117a4eb" path="/var/lib/kubelet/pods/65bd65ae-b1a7-4417-a743-500b8117a4eb/volumes" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.601372 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.603053 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65bd65ae-b1a7-4417-a743-500b8117a4eb" containerName="git-clone" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.603079 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="65bd65ae-b1a7-4417-a743-500b8117a4eb" containerName="git-clone" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.603359 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="65bd65ae-b1a7-4417-a743-500b8117a4eb" containerName="git-clone" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.908021 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.908249 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.910742 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\"" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.911176 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-6fddz\"" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.911972 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\"" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.912494 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\"" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.914124 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925407 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925458 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925544 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925603 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925639 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925737 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv6z\" (UniqueName: \"kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925809 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925922 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.925983 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.926026 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.926090 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.926164 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:18 crc kubenswrapper[5113]: I1212 14:26:18.926217 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.027714 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028237 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028298 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028382 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028419 5113 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028464 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028506 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028543 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fbv6z\" (UniqueName: \"kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028710 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028767 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028806 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.028919 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.029180 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.029539 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.029680 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.029939 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.030024 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.030251 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.030283 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.030480 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.036130 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.040302 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.040553 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.047255 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbv6z\" (UniqueName: \"kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z\") pod \"service-telemetry-framework-index-3-build\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.225084 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:19 crc kubenswrapper[5113]: I1212 14:26:19.433635 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:19 crc kubenswrapper[5113]: W1212 14:26:19.438651 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed2fbdc_31fe_42c4_af81_8e0969f3ac3b.slice/crio-b5544d7a6652c8ca3db7c4d5cbfb56447274a597290edc42d7a122ab6cdbc63c WatchSource:0}: Error finding container b5544d7a6652c8ca3db7c4d5cbfb56447274a597290edc42d7a122ab6cdbc63c: Status 404 returned error can't find the container with id b5544d7a6652c8ca3db7c4d5cbfb56447274a597290edc42d7a122ab6cdbc63c Dec 12 14:26:20 crc kubenswrapper[5113]: I1212 14:26:20.139918 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b","Type":"ContainerStarted","Data":"b5544d7a6652c8ca3db7c4d5cbfb56447274a597290edc42d7a122ab6cdbc63c"} Dec 12 14:26:21 crc kubenswrapper[5113]: I1212 14:26:21.147742 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b","Type":"ContainerStarted","Data":"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7"} Dec 12 14:26:21 crc kubenswrapper[5113]: I1212 14:26:21.191741 5113 ???:1] "http: TLS handshake error from 192.168.126.11:48528: no serving certificate available for the kubelet" Dec 12 14:26:22 crc kubenswrapper[5113]: I1212 14:26:22.237737 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.164428 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" containerName="git-clone" containerID="cri-o://a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7" gracePeriod=30 Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.592076 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b/git-clone/0.log" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.592393 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699033 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699109 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699163 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699203 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699228 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699254 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699301 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699323 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699308 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699389 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699599 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699701 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699759 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699826 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbv6z\" (UniqueName: \"kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z\") pod \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\" (UID: \"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b\") " Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.699957 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700149 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700382 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700475 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700535 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700583 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.700913 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.701592 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.701637 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.701665 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.701693 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.701718 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.702330 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.702358 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.702503 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.702532 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.705734 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.705753 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z" (OuterVolumeSpecName: "kube-api-access-fbv6z") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "kube-api-access-fbv6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.705857 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push" (OuterVolumeSpecName: "builder-dockercfg-6fddz-push") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "builder-dockercfg-6fddz-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.707272 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull" (OuterVolumeSpecName: "builder-dockercfg-6fddz-pull") pod "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" (UID: "0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b"). InnerVolumeSpecName "builder-dockercfg-6fddz-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.804240 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-push\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.804305 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-builder-dockercfg-6fddz-pull\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.804330 5113 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:23 crc kubenswrapper[5113]: I1212 14:26:23.804389 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fbv6z\" (UniqueName: \"kubernetes.io/projected/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b-kube-api-access-fbv6z\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.173829 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b/git-clone/0.log" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.174149 5113 generic.go:358] "Generic (PLEG): container finished" podID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" containerID="a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7" exitCode=1 Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.174219 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b","Type":"ContainerDied","Data":"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7"} Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.174252 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b","Type":"ContainerDied","Data":"b5544d7a6652c8ca3db7c4d5cbfb56447274a597290edc42d7a122ab6cdbc63c"} Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.174275 5113 scope.go:117] "RemoveContainer" 
containerID="a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.174292 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.277548 5113 scope.go:117] "RemoveContainer" containerID="a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7" Dec 12 14:26:24 crc kubenswrapper[5113]: E1212 14:26:24.279624 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7\": container with ID starting with a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7 not found: ID does not exist" containerID="a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.279665 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7"} err="failed to get container status \"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7\": rpc error: code = NotFound desc = could not find container \"a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7\": container with ID starting with a58e723d615faddaa0b6b4fb989672355f231165a299b7b3c6964242f6c173a7 not found: ID does not exist" Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.283647 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:24 crc kubenswrapper[5113]: I1212 14:26:24.289205 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 14:26:25 crc kubenswrapper[5113]: I1212 14:26:25.492879 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" path="/var/lib/kubelet/pods/0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b/volumes" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.698312 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.700090 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" containerName="git-clone" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.700157 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" containerName="git-clone" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.700378 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ed2fbdc-31fe-42c4-af81-8e0969f3ac3b" containerName="git-clone" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.742314 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.742536 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.745823 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\"" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.748233 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\"" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.748548 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.748866 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\"" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.749401 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-6fddz\"" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796752 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796823 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796877 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6fm\" (UniqueName: \"kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796901 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796922 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796949 
5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.796988 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797015 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797041 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797105 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797154 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.797202 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 
12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898483 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898594 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xq6fm\" (UniqueName: \"kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898616 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898635 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898660 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898689 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898757 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898788 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: 
I1212 14:26:33.898805 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898849 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898863 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898879 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898894 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.898921 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.899603 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.899634 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.900041 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.900107 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.900233 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.900511 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.900890 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.901217 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.904690 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.904742 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.904824 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:33 crc kubenswrapper[5113]: I1212 14:26:33.916145 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq6fm\" (UniqueName: \"kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm\") pod \"service-telemetry-framework-index-4-build\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:34 crc kubenswrapper[5113]: I1212 14:26:34.081310 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:34 crc kubenswrapper[5113]: I1212 14:26:34.321103 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:35 crc kubenswrapper[5113]: I1212 14:26:35.262870 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"6227c615-d27b-4449-b9b2-7ffc57a64397","Type":"ContainerStarted","Data":"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b"} Dec 12 14:26:35 crc kubenswrapper[5113]: I1212 14:26:35.263300 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"6227c615-d27b-4449-b9b2-7ffc57a64397","Type":"ContainerStarted","Data":"3aaae8c882d263539d8573cb8c3cd9b4ea56f1a7919024990de2bae120c3ab33"} Dec 12 14:26:35 crc kubenswrapper[5113]: I1212 14:26:35.324848 5113 ???:1] "http: TLS handshake error from 192.168.126.11:42250: no serving certificate available for the kubelet" Dec 12 14:26:36 crc kubenswrapper[5113]: I1212 14:26:36.356565 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.277961 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="6227c615-d27b-4449-b9b2-7ffc57a64397" containerName="git-clone" containerID="cri-o://985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b" gracePeriod=30 Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.801271 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.824667 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.824786 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.827575 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-mn4j6\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.859998 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_6227c615-d27b-4449-b9b2-7ffc57a64397/git-clone/0.log" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.860073 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957351 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957392 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957434 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957462 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957514 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957531 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957549 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957592 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957635 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957662 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957725 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6fm\" (UniqueName: \"kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957749 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push\") pod \"6227c615-d27b-4449-b9b2-7ffc57a64397\" (UID: \"6227c615-d27b-4449-b9b2-7ffc57a64397\") " Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.957753 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958222 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958269 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958325 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6q5b\" (UniqueName: \"kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b\") pod \"infrawatch-operators-6zrhz\" (UID: \"4d67df52-5475-4d32-8335-9013cca2c86c\") " pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958338 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958353 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958550 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958584 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958611 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958622 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958633 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.958642 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6227c615-d27b-4449-b9b2-7ffc57a64397-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.959220 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.959345 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.959638 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.966602 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push" (OuterVolumeSpecName: "builder-dockercfg-6fddz-push") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "builder-dockercfg-6fddz-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.966612 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull" (OuterVolumeSpecName: "builder-dockercfg-6fddz-pull") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "builder-dockercfg-6fddz-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.966600 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm" (OuterVolumeSpecName: "kube-api-access-xq6fm") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "kube-api-access-xq6fm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:26:37 crc kubenswrapper[5113]: I1212 14:26:37.966656 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "6227c615-d27b-4449-b9b2-7ffc57a64397" (UID: "6227c615-d27b-4449-b9b2-7ffc57a64397"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.059432 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h6q5b\" (UniqueName: \"kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b\") pod \"infrawatch-operators-6zrhz\" (UID: \"4d67df52-5475-4d32-8335-9013cca2c86c\") " pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.059985 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060090 5113 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060202 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-pull\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-pull\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060276 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060343 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6227c615-d27b-4449-b9b2-7ffc57a64397-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060401 5113 reconciler_common.go:299] "Volume 
detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6227c615-d27b-4449-b9b2-7ffc57a64397-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060459 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xq6fm\" (UniqueName: \"kubernetes.io/projected/6227c615-d27b-4449-b9b2-7ffc57a64397-kube-api-access-xq6fm\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.060516 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-6fddz-push\" (UniqueName: \"kubernetes.io/secret/6227c615-d27b-4449-b9b2-7ffc57a64397-builder-dockercfg-6fddz-push\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.077817 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6q5b\" (UniqueName: \"kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b\") pod \"infrawatch-operators-6zrhz\" (UID: \"4d67df52-5475-4d32-8335-9013cca2c86c\") " pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.172699 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286467 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_6227c615-d27b-4449-b9b2-7ffc57a64397/git-clone/0.log" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286498 5113 generic.go:358] "Generic (PLEG): container finished" podID="6227c615-d27b-4449-b9b2-7ffc57a64397" containerID="985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b" exitCode=1 Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"6227c615-d27b-4449-b9b2-7ffc57a64397","Type":"ContainerDied","Data":"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b"} Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286601 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"6227c615-d27b-4449-b9b2-7ffc57a64397","Type":"ContainerDied","Data":"3aaae8c882d263539d8573cb8c3cd9b4ea56f1a7919024990de2bae120c3ab33"} Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286617 5113 scope.go:117] "RemoveContainer" containerID="985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.286725 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.321384 5113 scope.go:117] "RemoveContainer" containerID="985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b" Dec 12 14:26:38 crc kubenswrapper[5113]: E1212 14:26:38.322211 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b\": container with ID starting with 985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b not found: ID does not exist" containerID="985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.322256 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b"} err="failed to get container status \"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b\": rpc error: code = NotFound desc = could not find container \"985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b\": container with ID starting with 985475928a013b87d83303a4d46824accfb507fda0d7ca6e2863364b011f462b not found: ID does not exist" Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.323282 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.330790 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 14:26:38 crc kubenswrapper[5113]: I1212 14:26:38.374701 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:38 crc kubenswrapper[5113]: W1212 14:26:38.383748 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d67df52_5475_4d32_8335_9013cca2c86c.slice/crio-5a3634f70c42d4e58f921ab8020e8ccc0c20618a9e61c63fb27f8acb0a713542 WatchSource:0}: Error finding container 5a3634f70c42d4e58f921ab8020e8ccc0c20618a9e61c63fb27f8acb0a713542: Status 404 returned error can't find the container with id 5a3634f70c42d4e58f921ab8020e8ccc0c20618a9e61c63fb27f8acb0a713542 Dec 12 14:26:38 crc kubenswrapper[5113]: E1212 14:26:38.450900 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:26:38 crc kubenswrapper[5113]: E1212 14:26:38.451313 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6q5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6zrhz_service-telemetry(4d67df52-5475-4d32-8335-9013cca2c86c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:26:38 crc kubenswrapper[5113]: E1212 14:26:38.452799 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6zrhz" 
podUID="4d67df52-5475-4d32-8335-9013cca2c86c" Dec 12 14:26:39 crc kubenswrapper[5113]: I1212 14:26:39.295950 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6zrhz" event={"ID":"4d67df52-5475-4d32-8335-9013cca2c86c","Type":"ContainerStarted","Data":"5a3634f70c42d4e58f921ab8020e8ccc0c20618a9e61c63fb27f8acb0a713542"} Dec 12 14:26:39 crc kubenswrapper[5113]: E1212 14:26:39.296966 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6zrhz" podUID="4d67df52-5475-4d32-8335-9013cca2c86c" Dec 12 14:26:39 crc kubenswrapper[5113]: I1212 14:26:39.504112 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6227c615-d27b-4449-b9b2-7ffc57a64397" path="/var/lib/kubelet/pods/6227c615-d27b-4449-b9b2-7ffc57a64397/volumes" Dec 12 14:26:40 crc kubenswrapper[5113]: E1212 14:26:40.309097 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6zrhz" podUID="4d67df52-5475-4d32-8335-9013cca2c86c" Dec 12 14:26:42 crc kubenswrapper[5113]: I1212 14:26:42.990467 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.213066 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.329878 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6zrhz" event={"ID":"4d67df52-5475-4d32-8335-9013cca2c86c","Type":"ContainerDied","Data":"5a3634f70c42d4e58f921ab8020e8ccc0c20618a9e61c63fb27f8acb0a713542"} Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.330272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6q5b\" (UniqueName: \"kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b\") pod \"4d67df52-5475-4d32-8335-9013cca2c86c\" (UID: \"4d67df52-5475-4d32-8335-9013cca2c86c\") " Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.329906 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6zrhz" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.340326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b" (OuterVolumeSpecName: "kube-api-access-h6q5b") pod "4d67df52-5475-4d32-8335-9013cca2c86c" (UID: "4d67df52-5475-4d32-8335-9013cca2c86c"). InnerVolumeSpecName "kube-api-access-h6q5b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.432481 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6q5b\" (UniqueName: \"kubernetes.io/projected/4d67df52-5475-4d32-8335-9013cca2c86c-kube-api-access-h6q5b\") on node \"crc\" DevicePath \"\"" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.663239 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.667902 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-6zrhz"] Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.802523 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-hgmfl"] Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.803203 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6227c615-d27b-4449-b9b2-7ffc57a64397" containerName="git-clone" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.803220 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6227c615-d27b-4449-b9b2-7ffc57a64397" containerName="git-clone" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.803370 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="6227c615-d27b-4449-b9b2-7ffc57a64397" containerName="git-clone" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.827659 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hgmfl"] Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.827824 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-hgmfl" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.831144 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-mn4j6\"" Dec 12 14:26:43 crc kubenswrapper[5113]: I1212 14:26:43.940347 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbqb\" (UniqueName: \"kubernetes.io/projected/aec662d5-147a-4efb-ac69-80a0fc01a91e-kube-api-access-2dbqb\") pod \"infrawatch-operators-hgmfl\" (UID: \"aec662d5-147a-4efb-ac69-80a0fc01a91e\") " pod="service-telemetry/infrawatch-operators-hgmfl" Dec 12 14:26:44 crc kubenswrapper[5113]: I1212 14:26:44.042545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbqb\" (UniqueName: \"kubernetes.io/projected/aec662d5-147a-4efb-ac69-80a0fc01a91e-kube-api-access-2dbqb\") pod \"infrawatch-operators-hgmfl\" (UID: \"aec662d5-147a-4efb-ac69-80a0fc01a91e\") " pod="service-telemetry/infrawatch-operators-hgmfl" Dec 12 14:26:44 crc kubenswrapper[5113]: I1212 14:26:44.067004 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbqb\" (UniqueName: \"kubernetes.io/projected/aec662d5-147a-4efb-ac69-80a0fc01a91e-kube-api-access-2dbqb\") pod \"infrawatch-operators-hgmfl\" (UID: \"aec662d5-147a-4efb-ac69-80a0fc01a91e\") " pod="service-telemetry/infrawatch-operators-hgmfl" Dec 12 14:26:44 crc kubenswrapper[5113]: I1212 14:26:44.145347 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hgmfl" Dec 12 14:26:44 crc kubenswrapper[5113]: I1212 14:26:44.554855 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hgmfl"] Dec 12 14:26:44 crc kubenswrapper[5113]: E1212 14:26:44.627982 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:26:44 crc kubenswrapper[5113]: E1212 14:26:44.628389 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:26:44 crc kubenswrapper[5113]: E1212 14:26:44.629595 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:26:45 crc kubenswrapper[5113]: I1212 14:26:45.343909 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hgmfl" event={"ID":"aec662d5-147a-4efb-ac69-80a0fc01a91e","Type":"ContainerStarted","Data":"bcf64c61bb929095c8920566873cb193f4dc947d469de6606e765d1e3ffdbdc3"} Dec 12 14:26:45 crc kubenswrapper[5113]: E1212 14:26:45.344835 5113 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:26:45 crc kubenswrapper[5113]: I1212 14:26:45.489772 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d67df52-5475-4d32-8335-9013cca2c86c" path="/var/lib/kubelet/pods/4d67df52-5475-4d32-8335-9013cca2c86c/volumes" Dec 12 14:26:46 crc kubenswrapper[5113]: E1212 14:26:46.353229 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:26:59 crc kubenswrapper[5113]: I1212 14:26:59.483946 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:26:59 crc kubenswrapper[5113]: E1212 14:26:59.554747 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:26:59 crc kubenswrapper[5113]: E1212 14:26:59.554949 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:26:59 crc kubenswrapper[5113]: E1212 14:26:59.556141 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" 
podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:27:00 crc kubenswrapper[5113]: E1212 14:27:00.566971 5113 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.568765 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.577848 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.600190 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45886: no serving certificate available for the kubelet" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.702055 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45894: no serving certificate available for the kubelet" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.735022 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45908: no serving certificate available for the kubelet" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.777340 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45912: no serving certificate available for the kubelet" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.836427 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45924: no serving certificate available for the kubelet" Dec 12 14:27:02 crc kubenswrapper[5113]: I1212 14:27:02.938657 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45928: no serving certificate available for the kubelet" Dec 12 14:27:03 crc kubenswrapper[5113]: I1212 14:27:03.124916 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45942: no serving certificate available for the kubelet" Dec 12 14:27:03 crc kubenswrapper[5113]: I1212 14:27:03.476349 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45952: no serving certificate available for the kubelet" Dec 12 14:27:04 crc kubenswrapper[5113]: I1212 14:27:04.154600 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45962: no serving certificate available for the kubelet" Dec 12 14:27:05 crc kubenswrapper[5113]: I1212 14:27:05.461045 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45974: no serving certificate available for the kubelet" Dec 12 14:27:08 crc kubenswrapper[5113]: I1212 14:27:08.049529 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56256: no serving certificate available for the kubelet" Dec 12 14:27:12 crc kubenswrapper[5113]: E1212 14:27:12.485446 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:27:13 crc kubenswrapper[5113]: I1212 14:27:13.203271 5113 ???:1] "http: TLS handshake error from 192.168.126.11:56262: no serving certificate available for the kubelet" Dec 12 14:27:23 crc kubenswrapper[5113]: I1212 14:27:23.475656 5113 ???:1] "http: TLS handshake error from 192.168.126.11:52452: no serving certificate available for the kubelet" Dec 12 14:27:27 crc kubenswrapper[5113]: E1212 14:27:27.540193 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:27:27 crc kubenswrapper[5113]: E1212 14:27:27.540403 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:27:27 crc kubenswrapper[5113]: E1212 14:27:27.541678 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:27:38 crc kubenswrapper[5113]: E1212 14:27:38.484753 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:27:43 crc kubenswrapper[5113]: I1212 14:27:43.978385 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50366: no serving certificate available for the kubelet" Dec 12 14:27:50 crc kubenswrapper[5113]: E1212 14:27:50.483817 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
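Every probe in the registry-server container spec dumped above execs grpc_health_probe -addr=:50051. The pod never gets that far here because the image pull fails, but for reference, the equivalent check through the standard gRPC health/v1 API looks roughly like the sketch below; the target address is an assumption, and the code requires google.golang.org/grpc (grpc.NewClient needs grpc-go v1.63 or later).

    // healthprobe.go: the check that grpc_health_probe performs, done in
    // process: call the gRPC health/v1 Check RPC and treat anything other
    // than SERVING as a failed probe.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // localhost:50051 is an assumption standing in for the probe's -addr=:50051.
        conn, err := grpc.NewClient("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) // TimeoutSeconds:5, as in the spec
        defer cancel()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING means the probe passes
    }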
pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:28:03 crc kubenswrapper[5113]: E1212 14:28:03.484101 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:28:18 crc kubenswrapper[5113]: E1212 14:28:18.541398 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:28:18 crc kubenswrapper[5113]: E1212 14:28:18.542183 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:28:18 crc kubenswrapper[5113]: E1212 14:28:18.543599 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:28:20 crc kubenswrapper[5113]: I1212 14:28:20.901846 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:28:20 crc kubenswrapper[5113]: I1212 14:28:20.902228 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:28:24 crc kubenswrapper[5113]: I1212 14:28:24.965576 5113 ???:1] "http: TLS handshake error from 192.168.126.11:52790: no serving certificate available for the kubelet" Dec 12 14:28:33 crc kubenswrapper[5113]: E1212 14:28:33.485585 5113 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:28:46 crc kubenswrapper[5113]: E1212 14:28:46.483552 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:28:50 crc kubenswrapper[5113]: I1212 14:28:50.901741 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:28:50 crc kubenswrapper[5113]: I1212 14:28:50.902269 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:29:00 crc kubenswrapper[5113]: E1212 14:29:00.483753 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:29:12 crc kubenswrapper[5113]: E1212 14:29:12.483380 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:29:20 crc kubenswrapper[5113]: I1212 14:29:20.901775 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:29:20 crc kubenswrapper[5113]: I1212 14:29:20.902372 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:29:20 crc kubenswrapper[5113]: I1212 14:29:20.902434 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:29:20 crc kubenswrapper[5113]: I1212 14:29:20.903275 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:29:20 crc kubenswrapper[5113]: I1212 14:29:20.903363 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af" gracePeriod=600 Dec 12 14:29:21 crc kubenswrapper[5113]: I1212 14:29:21.441666 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af" exitCode=0 Dec 12 14:29:21 crc kubenswrapper[5113]: I1212 14:29:21.441706 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af"} Dec 12 14:29:21 crc kubenswrapper[5113]: I1212 14:29:21.442002 5113 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4"} Dec 12 14:29:21 crc kubenswrapper[5113]: I1212 14:29:21.442022 5113 scope.go:117] "RemoveContainer" containerID="089ae0b96ee1d17b3fab45c8858e415f05531dedd2d6cdd703124f47bb96a0d5" Dec 12 14:29:26 crc kubenswrapper[5113]: E1212 14:29:26.483091 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:29:39 crc kubenswrapper[5113]: E1212 14:29:39.548052 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:29:39 crc kubenswrapper[5113]: E1212 14:29:39.548493 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:29:39 crc kubenswrapper[5113]: E1212 14:29:39.549740 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:29:46 crc kubenswrapper[5113]: I1212 14:29:46.912210 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50628: no serving certificate available for the kubelet" Dec 12 14:29:52 crc kubenswrapper[5113]: E1212 14:29:52.483980 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.158355 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq"] Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.169191 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.170887 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq"] Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.172258 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.172583 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.292638 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc6r\" (UniqueName: \"kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.292710 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.292798 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.394805 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc6r\" (UniqueName: \"kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.394874 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.394915 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.396156 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.408776 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.415733 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc6r\" (UniqueName: \"kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r\") pod \"collect-profiles-29425830-28mtq\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.511191 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:00 crc kubenswrapper[5113]: I1212 14:30:00.922665 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq"] Dec 12 14:30:01 crc kubenswrapper[5113]: I1212 14:30:01.732137 5113 generic.go:358] "Generic (PLEG): container finished" podID="f5af8a27-01b9-4f8b-a22d-d5dce00fa917" containerID="353929c802011b97b96521d16baf5308767512f677e59f842a5dc4f5f9614f06" exitCode=0 Dec 12 14:30:01 crc kubenswrapper[5113]: I1212 14:30:01.732258 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" event={"ID":"f5af8a27-01b9-4f8b-a22d-d5dce00fa917","Type":"ContainerDied","Data":"353929c802011b97b96521d16baf5308767512f677e59f842a5dc4f5f9614f06"} Dec 12 14:30:01 crc kubenswrapper[5113]: I1212 14:30:01.732555 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" event={"ID":"f5af8a27-01b9-4f8b-a22d-d5dce00fa917","Type":"ContainerStarted","Data":"8ce2bd59e3d7811004d4dfc872c7247864b019f53b2860cf3ff0f8b1cd583b52"} Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.043912 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.128861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume\") pod \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.129051 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume\") pod \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.129271 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pc6r\" (UniqueName: \"kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r\") pod \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\" (UID: \"f5af8a27-01b9-4f8b-a22d-d5dce00fa917\") " Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.129930 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5af8a27-01b9-4f8b-a22d-d5dce00fa917" (UID: "f5af8a27-01b9-4f8b-a22d-d5dce00fa917"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.134274 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5af8a27-01b9-4f8b-a22d-d5dce00fa917" (UID: "f5af8a27-01b9-4f8b-a22d-d5dce00fa917"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.142350 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r" (OuterVolumeSpecName: "kube-api-access-7pc6r") pod "f5af8a27-01b9-4f8b-a22d-d5dce00fa917" (UID: "f5af8a27-01b9-4f8b-a22d-d5dce00fa917"). InnerVolumeSpecName "kube-api-access-7pc6r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.230840 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.230886 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.230929 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7pc6r\" (UniqueName: \"kubernetes.io/projected/f5af8a27-01b9-4f8b-a22d-d5dce00fa917-kube-api-access-7pc6r\") on node \"crc\" DevicePath \"\"" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.749231 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" event={"ID":"f5af8a27-01b9-4f8b-a22d-d5dce00fa917","Type":"ContainerDied","Data":"8ce2bd59e3d7811004d4dfc872c7247864b019f53b2860cf3ff0f8b1cd583b52"} Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.749276 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce2bd59e3d7811004d4dfc872c7247864b019f53b2860cf3ff0f8b1cd583b52" Dec 12 14:30:03 crc kubenswrapper[5113]: I1212 14:30:03.749362 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425830-28mtq" Dec 12 14:30:04 crc kubenswrapper[5113]: E1212 14:30:04.483014 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:30:15 crc kubenswrapper[5113]: E1212 14:30:15.483880 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:30:19 crc kubenswrapper[5113]: I1212 14:30:19.997342 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:30:19 crc kubenswrapper[5113]: I1212 14:30:19.997915 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:30:20 crc kubenswrapper[5113]: I1212 14:30:20.004609 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:30:20 crc kubenswrapper[5113]: I1212 14:30:20.004622 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:30:26 crc kubenswrapper[5113]: E1212 14:30:26.483227 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:30:40 crc kubenswrapper[5113]: E1212 14:30:40.489647 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:30:53 crc kubenswrapper[5113]: E1212 14:30:53.484430 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: 
reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:31:07 crc kubenswrapper[5113]: E1212 14:31:07.488042 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:31:22 crc kubenswrapper[5113]: E1212 14:31:22.483895 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:31:34 crc kubenswrapper[5113]: E1212 14:31:34.483737 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:31:40 crc kubenswrapper[5113]: I1212 14:31:40.852470 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49916: no serving 
certificate available for the kubelet" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.633778 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-wpxz5"] Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.634909 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5af8a27-01b9-4f8b-a22d-d5dce00fa917" containerName="collect-profiles" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.635134 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5af8a27-01b9-4f8b-a22d-d5dce00fa917" containerName="collect-profiles" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.635341 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5af8a27-01b9-4f8b-a22d-d5dce00fa917" containerName="collect-profiles" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.697628 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-wpxz5" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.701502 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wpxz5"] Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.842549 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65lbb\" (UniqueName: \"kubernetes.io/projected/208dda3c-f1f0-4b82-9f0c-12464184846e-kube-api-access-65lbb\") pod \"infrawatch-operators-wpxz5\" (UID: \"208dda3c-f1f0-4b82-9f0c-12464184846e\") " pod="service-telemetry/infrawatch-operators-wpxz5" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.943834 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-65lbb\" (UniqueName: \"kubernetes.io/projected/208dda3c-f1f0-4b82-9f0c-12464184846e-kube-api-access-65lbb\") pod \"infrawatch-operators-wpxz5\" (UID: \"208dda3c-f1f0-4b82-9f0c-12464184846e\") " pod="service-telemetry/infrawatch-operators-wpxz5" Dec 12 14:31:45 crc kubenswrapper[5113]: I1212 14:31:45.973879 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-65lbb\" (UniqueName: \"kubernetes.io/projected/208dda3c-f1f0-4b82-9f0c-12464184846e-kube-api-access-65lbb\") pod \"infrawatch-operators-wpxz5\" (UID: \"208dda3c-f1f0-4b82-9f0c-12464184846e\") " pod="service-telemetry/infrawatch-operators-wpxz5" Dec 12 14:31:46 crc kubenswrapper[5113]: I1212 14:31:46.017559 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-wpxz5" Dec 12 14:31:46 crc kubenswrapper[5113]: I1212 14:31:46.238272 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wpxz5"] Dec 12 14:31:46 crc kubenswrapper[5113]: W1212 14:31:46.244895 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208dda3c_f1f0_4b82_9f0c_12464184846e.slice/crio-d309f303173c512042d112464694a6414d20fc7a8bfbe6d8a82d273b96489d68 WatchSource:0}: Error finding container d309f303173c512042d112464694a6414d20fc7a8bfbe6d8a82d273b96489d68: Status 404 returned error can't find the container with id d309f303173c512042d112464694a6414d20fc7a8bfbe6d8a82d273b96489d68 Dec 12 14:31:46 crc kubenswrapper[5113]: E1212 14:31:46.302252 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:31:46 crc kubenswrapper[5113]: E1212 14:31:46.302525 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65lbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wpxz5_service-telemetry(208dda3c-f1f0-4b82-9f0c-12464184846e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:31:46 crc kubenswrapper[5113]: E1212 14:31:46.303745 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:31:46 crc kubenswrapper[5113]: I1212 14:31:46.705661 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wpxz5" event={"ID":"208dda3c-f1f0-4b82-9f0c-12464184846e","Type":"ContainerStarted","Data":"d309f303173c512042d112464694a6414d20fc7a8bfbe6d8a82d273b96489d68"} Dec 12 14:31:46 crc kubenswrapper[5113]: E1212 14:31:46.707042 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:31:47 crc kubenswrapper[5113]: E1212 14:31:47.715272 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:31:48 crc kubenswrapper[5113]: E1212 14:31:48.483959 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:31:50 crc kubenswrapper[5113]: I1212 14:31:50.901951 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:31:50 crc kubenswrapper[5113]: I1212 14:31:50.902050 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:31:58 crc kubenswrapper[5113]: E1212 14:31:58.569752 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:31:58 crc kubenswrapper[5113]: E1212 14:31:58.570383 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65lbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wpxz5_service-telemetry(208dda3c-f1f0-4b82-9f0c-12464184846e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:31:58 crc kubenswrapper[5113]: E1212 14:31:58.571684 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" 
podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:31:59 crc kubenswrapper[5113]: E1212 14:31:59.484760 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:32:12 crc kubenswrapper[5113]: I1212 14:32:12.483327 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 14:32:12 crc kubenswrapper[5113]: E1212 14:32:12.484167 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:32:13 crc kubenswrapper[5113]: E1212 14:32:13.484532 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:32:20 crc kubenswrapper[5113]: I1212 14:32:20.902266 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:32:20 crc 
kubenswrapper[5113]: I1212 14:32:20.903013 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:32:24 crc kubenswrapper[5113]: E1212 14:32:24.535846 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:32:24 crc kubenswrapper[5113]: E1212 14:32:24.536333 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65lbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wpxz5_service-telemetry(208dda3c-f1f0-4b82-9f0c-12464184846e): ErrImagePull: unable to pull image or OCI artifact: pull image 
err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:32:24 crc kubenswrapper[5113]: E1212 14:32:24.537545 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:32:25 crc kubenswrapper[5113]: E1212 14:32:25.576203 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:32:25 crc kubenswrapper[5113]: E1212 14:32:25.576370 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:32:25 crc kubenswrapper[5113]: E1212 14:32:25.577557 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:32:30 crc kubenswrapper[5113]: I1212 14:32:30.783849 5113 ???:1] "http: TLS handshake error from 192.168.126.11:37880: no serving certificate available for the kubelet" Dec 12 14:32:38 crc kubenswrapper[5113]: E1212 14:32:38.483360 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
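The cluster of entries above is one full failure cycle: the CRI pull fails with "manifest unknown" (the :latest tag does not exist in the internal registry), kuberuntime dumps the container spec it could not start, and pod_workers records the ErrImagePull/ImagePullBackOff for the pod. A minimal sketch (Python, assuming the journal has been saved one-entry-per-line to a hypothetical ./kubelet.log) that tallies these pod_workers failures per pod:

    import re
    from collections import defaultdict

    # Matches pod_workers "Error syncing pod, skipping" entries like the ones above,
    # assuming one journal entry per line.
    FAIL = re.compile(
        r'(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) \S+ kubenswrapper\[\d+\]: '
        r'E\d{4} \S+ \d+ pod_workers\.go:\d+\] "Error syncing pod, skipping" '
        r'err=".*?with (?P<reason>ErrImagePull|ImagePullBackOff)'
    )
    POD = re.compile(r'pod="(?P<pod>[^"]+)"')

    failures = defaultdict(list)
    with open("kubelet.log") as fh:
        for entry in fh:
            m = FAIL.search(entry)
            if m:
                pod = POD.search(entry, m.end()).group("pod")
                failures[pod].append((m.group("ts"), m.group("reason")))

    for pod, events in sorted(failures.items()):
        print(f"{pod}: {len(events)} failures, first {events[0]}, last {events[-1]}")

Run against this capture it would report dozens of entries each for infrawatch-operators-wpxz5 and infrawatch-operators-hgmfl, all with the same root cause reported in the pull error.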
Dec 12 14:32:50 crc kubenswrapper[5113]: I1212 14:32:50.902663 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 14:32:50 crc kubenswrapper[5113]: I1212 14:32:50.903066 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 14:32:50 crc kubenswrapper[5113]: I1212 14:32:50.903234 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52"
Dec 12 14:32:50 crc kubenswrapper[5113]: I1212 14:32:50.904116 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 14:32:50 crc kubenswrapper[5113]: I1212 14:32:50.904288 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4" gracePeriod=600
Dec 12 14:32:51 crc kubenswrapper[5113]: I1212 14:32:51.153423 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4" exitCode=0
Dec 12 14:32:51 crc kubenswrapper[5113]: I1212 14:32:51.153514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4"}
Dec 12 14:32:51 crc kubenswrapper[5113]: I1212 14:32:51.153575 5113 scope.go:117] "RemoveContainer" containerID="4cc313e220970e3a212d1df664bb6f7cf15eb74a44da11eedd618da00bc982af"
Dec 12 14:32:52 crc kubenswrapper[5113]: I1212 14:32:52.167216 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f"}
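The entries above form one complete kubelet restart cycle for machine-config-daemon: liveness probe failure, the sync loop marking the container unhealthy, a kill with the 600s grace period, the PLEG ContainerDied event, and the replacement ContainerStarted. A minimal sketch that reconstructs such a timeline from the same hypothetical one-entry-per-line kubelet.log:

    # Marker substrings copied from the entries above; any entry mentioning the pod
    # and containing one of them is a step in the restart cycle.
    POD = "machine-config-daemon-5dn52"
    MARKERS = [
        ('"Probe failed"', "liveness probe failed"),
        ('"Killing container with a grace period"', "kubelet kills the container"),
        ('"Type":"ContainerDied"', "PLEG sees the old container exit"),
        ('"Type":"ContainerStarted"', "PLEG sees the replacement start"),
    ]

    with open("kubelet.log") as fh:
        for entry in fh:
            if POD not in entry:
                continue
            for needle, label in MARKERS:
                if needle in entry:
                    print(entry[:15], "-", label)  # entry[:15] == e.g. "Dec 12 14:32:50"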
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65lbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wpxz5_service-telemetry(208dda3c-f1f0-4b82-9f0c-12464184846e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:33:05 crc kubenswrapper[5113]: E1212 14:33:05.556178 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" 
podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:33:06 crc kubenswrapper[5113]: E1212 14:33:06.483535 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:33:16 crc kubenswrapper[5113]: E1212 14:33:16.483680 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:33:20 crc kubenswrapper[5113]: E1212 14:33:20.484188 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:33:27 crc kubenswrapper[5113]: E1212 14:33:27.486696 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:33:32 crc kubenswrapper[5113]: E1212 14:33:32.483339 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:33:40 crc kubenswrapper[5113]: E1212 14:33:40.483809 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:33:43 crc kubenswrapper[5113]: E1212 14:33:43.484199 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:33:54 crc 
[... the pattern repeats: back-off entries for infrawatch-operators-hgmfl at 14:33:54, 14:34:06, 14:34:18, 14:34:33, 14:34:47, and 14:35:02, and for infrawatch-operators-wpxz5 at 14:33:55, 14:34:06, 14:34:18, 14:34:41, and 14:34:55, with the full PullImage failure and identical container-spec dump recurring for infrawatch-operators-wpxz5 at 14:34:30 ...]
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.893355 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b8fcd"]
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.899628 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8fcd"
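Note the spacing of the real pull attempts for infrawatch-operators-wpxz5 (the "PullImage from image service failed" entries at 14:32:24, 14:33:05, and 14:34:30): the intervals grow roughly geometrically, consistent with kubelet's doubling image-pull back-off (10s initial, capped at 5m, as I understand kubelet's defaults); the intermediate "Back-off pulling image" entries are sync-loop retries reporting the active back-off, not new pulls. A quick check of the arithmetic:

    from datetime import datetime

    # Timestamps of the actual CRI pull attempts, copied from the log above.
    attempts = ["14:32:24", "14:33:05", "14:34:30"]
    ts = [datetime.strptime(t, "%H:%M:%S") for t in attempts]
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    print(gaps)  # [41.0, 85.0] -- roughly the 40s/80s steps of a doubling back-off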
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.903608 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8fcd"]
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.961191 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.961230 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:06 crc kubenswrapper[5113]: I1212 14:35:06.961264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.062307 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.062927 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.063167 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.063565 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.063791 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.104152 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd"
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") pod \"community-operators-b8fcd\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.219400 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:07 crc kubenswrapper[5113]: E1212 14:35:07.499845 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.501398 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8fcd"] Dec 12 14:35:07 crc kubenswrapper[5113]: I1212 14:35:07.546016 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerStarted","Data":"a470473602f82724f10d468c206ba4059f6e86ba6a390506e7f06e1f3c8babb4"} Dec 12 14:35:08 crc kubenswrapper[5113]: I1212 14:35:08.557342 5113 generic.go:358] "Generic (PLEG): container finished" podID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerID="605dcdfe4cd9be1c1785777eb9ef019980398526bd73eea1bbe13001d5b5802f" exitCode=0 Dec 12 14:35:08 crc kubenswrapper[5113]: I1212 14:35:08.557441 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerDied","Data":"605dcdfe4cd9be1c1785777eb9ef019980398526bd73eea1bbe13001d5b5802f"} Dec 12 14:35:09 crc kubenswrapper[5113]: I1212 14:35:09.568593 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerStarted","Data":"1975bf73ac4e66294fb6eeffe63d27010fea33931f9e50f5248e92fe21edca6b"} Dec 12 14:35:10 crc kubenswrapper[5113]: I1212 14:35:10.578531 5113 generic.go:358] "Generic (PLEG): container finished" podID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerID="1975bf73ac4e66294fb6eeffe63d27010fea33931f9e50f5248e92fe21edca6b" exitCode=0 Dec 12 14:35:10 crc kubenswrapper[5113]: I1212 14:35:10.578668 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerDied","Data":"1975bf73ac4e66294fb6eeffe63d27010fea33931f9e50f5248e92fe21edca6b"} Dec 12 14:35:11 crc kubenswrapper[5113]: I1212 14:35:11.587516 5113 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerStarted","Data":"260876d9b61716badbcc0a72e80fffb4cc62de930c518e7dffd21901229692bb"} Dec 12 14:35:11 crc kubenswrapper[5113]: I1212 14:35:11.605019 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b8fcd" podStartSLOduration=4.999322946 podStartE2EDuration="5.604999715s" podCreationTimestamp="2025-12-12 14:35:06 +0000 UTC" firstStartedPulling="2025-12-12 14:35:08.558767075 +0000 UTC m=+1491.394016942" lastFinishedPulling="2025-12-12 14:35:09.164443874 +0000 UTC m=+1491.999693711" observedRunningTime="2025-12-12 14:35:11.602696912 +0000 UTC m=+1494.437946769" watchObservedRunningTime="2025-12-12 14:35:11.604999715 +0000 UTC m=+1494.440249562" Dec 12 14:35:17 crc kubenswrapper[5113]: I1212 14:35:17.220230 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:17 crc kubenswrapper[5113]: I1212 14:35:17.220306 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:17 crc kubenswrapper[5113]: I1212 14:35:17.290190 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:17 crc kubenswrapper[5113]: E1212 14:35:17.501749 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:35:17 crc kubenswrapper[5113]: I1212 14:35:17.707564 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:17 crc kubenswrapper[5113]: I1212 14:35:17.754912 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8fcd"] Dec 12 14:35:19 crc kubenswrapper[5113]: I1212 14:35:19.655441 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b8fcd" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="registry-server" containerID="cri-o://260876d9b61716badbcc0a72e80fffb4cc62de930c518e7dffd21901229692bb" gracePeriod=2 Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.085358 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.085414 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.090774 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.090946 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.665027 5113 generic.go:358] "Generic (PLEG): container finished" podID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerID="260876d9b61716badbcc0a72e80fffb4cc62de930c518e7dffd21901229692bb" exitCode=0 Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.665097 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerDied","Data":"260876d9b61716badbcc0a72e80fffb4cc62de930c518e7dffd21901229692bb"} Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.901964 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:35:20 crc kubenswrapper[5113]: I1212 14:35:20.902039 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.148823 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8fcd" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.291605 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content\") pod \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.291799 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities\") pod \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.291910 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") pod \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\" (UID: \"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642\") " Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.294307 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities" (OuterVolumeSpecName: "utilities") pod "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" (UID: "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.302023 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb" (OuterVolumeSpecName: "kube-api-access-g8hkb") pod "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" (UID: "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642"). InnerVolumeSpecName "kube-api-access-g8hkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.346461 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" (UID: "f6bdc7e7-5cdb-4952-8ba0-aff611dc1642"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.393684 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.393991 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.394074 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8hkb\" (UniqueName: \"kubernetes.io/projected/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642-kube-api-access-g8hkb\") on node \"crc\" DevicePath \"\"" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.676419 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8fcd" event={"ID":"f6bdc7e7-5cdb-4952-8ba0-aff611dc1642","Type":"ContainerDied","Data":"a470473602f82724f10d468c206ba4059f6e86ba6a390506e7f06e1f3c8babb4"} Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.676523 5113 scope.go:117] "RemoveContainer" containerID="260876d9b61716badbcc0a72e80fffb4cc62de930c518e7dffd21901229692bb" Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.676484 5113 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.701301 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8fcd"]
Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.706941 5113 scope.go:117] "RemoveContainer" containerID="1975bf73ac4e66294fb6eeffe63d27010fea33931f9e50f5248e92fe21edca6b"
Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.708736 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b8fcd"]
Dec 12 14:35:21 crc kubenswrapper[5113]: I1212 14:35:21.730462 5113 scope.go:117] "RemoveContainer" containerID="605dcdfe4cd9be1c1785777eb9ef019980398526bd73eea1bbe13001d5b5802f"
[... the ImagePullBackOff back-off entry repeats for infrawatch-operators-wpxz5 at 14:35:22 ...]
Dec 12 14:35:23 crc kubenswrapper[5113]: I1212 14:35:23.495626 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" path="/var/lib/kubelet/pods/f6bdc7e7-5cdb-4952-8ba0-aff611dc1642/volumes"
[... ImagePullBackOff back-off entries repeat for infrawatch-operators-hgmfl at 14:35:30 and 14:35:45 and for infrawatch-operators-wpxz5 at 14:35:37 ...]
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:35:45 crc kubenswrapper[5113]: E1212 14:35:45.483827 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.480670 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486698 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="extract-content" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486763 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="extract-content" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486832 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="registry-server" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486853 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="registry-server" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486921 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="extract-utilities" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.486941 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="extract-utilities" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.487228 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6bdc7e7-5cdb-4952-8ba0-aff611dc1642" containerName="registry-server" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.639634 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.657381 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.732944 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.733021 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jjs\" (UniqueName: \"kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.733047 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.833976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58jjs\" (UniqueName: \"kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.834024 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.834112 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.834672 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.834694 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.863183 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-58jjs\" (UniqueName: \"kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs\") pod \"certified-operators-r5jbx\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:49 crc kubenswrapper[5113]: I1212 14:35:49.967285 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:50 crc kubenswrapper[5113]: W1212 14:35:50.404788 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45fecb86_a23d_4bc7_b912_a8ed7da5b4ea.slice/crio-76b8a7fac52d12aabb45c23c6c40d9310a4e03b1539625515ecc6dd6d6b57a21 WatchSource:0}: Error finding container 76b8a7fac52d12aabb45c23c6c40d9310a4e03b1539625515ecc6dd6d6b57a21: Status 404 returned error can't find the container with id 76b8a7fac52d12aabb45c23c6c40d9310a4e03b1539625515ecc6dd6d6b57a21 Dec 12 14:35:50 crc kubenswrapper[5113]: I1212 14:35:50.408710 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:35:50 crc kubenswrapper[5113]: E1212 14:35:50.482960 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:35:50 crc kubenswrapper[5113]: I1212 14:35:50.901574 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:35:50 crc kubenswrapper[5113]: I1212 14:35:50.902522 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:35:50 crc kubenswrapper[5113]: I1212 14:35:50.912493 5113 generic.go:358] "Generic (PLEG): container finished" podID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerID="2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7" exitCode=0 Dec 12 14:35:50 crc kubenswrapper[5113]: I1212 14:35:50.912690 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerDied","Data":"2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7"} Dec 12 14:35:50 crc 
kubenswrapper[5113]: I1212 14:35:50.912739 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerStarted","Data":"76b8a7fac52d12aabb45c23c6c40d9310a4e03b1539625515ecc6dd6d6b57a21"} Dec 12 14:35:52 crc kubenswrapper[5113]: I1212 14:35:52.929723 5113 generic.go:358] "Generic (PLEG): container finished" podID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerID="2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474" exitCode=0 Dec 12 14:35:52 crc kubenswrapper[5113]: I1212 14:35:52.929816 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerDied","Data":"2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474"} Dec 12 14:35:53 crc kubenswrapper[5113]: I1212 14:35:53.939857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerStarted","Data":"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a"} Dec 12 14:35:53 crc kubenswrapper[5113]: I1212 14:35:53.959304 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5jbx" podStartSLOduration=3.808025387 podStartE2EDuration="4.959284758s" podCreationTimestamp="2025-12-12 14:35:49 +0000 UTC" firstStartedPulling="2025-12-12 14:35:50.91431433 +0000 UTC m=+1533.749564197" lastFinishedPulling="2025-12-12 14:35:52.065573701 +0000 UTC m=+1534.900823568" observedRunningTime="2025-12-12 14:35:53.958549654 +0000 UTC m=+1536.793799481" watchObservedRunningTime="2025-12-12 14:35:53.959284758 +0000 UTC m=+1536.794534595" Dec 12 14:35:59 crc kubenswrapper[5113]: E1212 14:35:59.483609 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:35:59 crc kubenswrapper[5113]: I1212 14:35:59.967477 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:35:59 crc kubenswrapper[5113]: I1212 14:35:59.968233 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:36:00 crc kubenswrapper[5113]: I1212 14:36:00.014965 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:36:01 crc kubenswrapper[5113]: I1212 14:36:01.040755 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:36:01 crc kubenswrapper[5113]: I1212 14:36:01.091542 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:36:03 crc kubenswrapper[5113]: I1212 14:36:03.011271 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5jbx" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="registry-server" containerID="cri-o://cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a" gracePeriod=2 Dec 12 14:36:04 crc kubenswrapper[5113]: E1212 14:36:04.485935 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.643390 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.752530 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities\") pod \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.752667 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58jjs\" (UniqueName: \"kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs\") pod \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.752820 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content\") pod \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\" (UID: \"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea\") " Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.754022 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities" (OuterVolumeSpecName: "utilities") pod "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" (UID: "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.761326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs" (OuterVolumeSpecName: "kube-api-access-58jjs") pod "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" (UID: "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea"). InnerVolumeSpecName "kube-api-access-58jjs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.792823 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" (UID: "45fecb86-a23d-4bc7-b912-a8ed7da5b4ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.854905 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.854945 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:36:04 crc kubenswrapper[5113]: I1212 14:36:04.854959 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58jjs\" (UniqueName: \"kubernetes.io/projected/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea-kube-api-access-58jjs\") on node \"crc\" DevicePath \"\"" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.030489 5113 generic.go:358] "Generic (PLEG): container finished" podID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerID="cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a" exitCode=0 Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.030607 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerDied","Data":"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a"} Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.030670 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5jbx" event={"ID":"45fecb86-a23d-4bc7-b912-a8ed7da5b4ea","Type":"ContainerDied","Data":"76b8a7fac52d12aabb45c23c6c40d9310a4e03b1539625515ecc6dd6d6b57a21"} Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.030699 5113 scope.go:117] "RemoveContainer" containerID="cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.031341 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5jbx" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.053657 5113 scope.go:117] "RemoveContainer" containerID="2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.082206 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.090198 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5jbx"] Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.094386 5113 scope.go:117] "RemoveContainer" containerID="2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.111079 5113 scope.go:117] "RemoveContainer" containerID="cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a" Dec 12 14:36:05 crc kubenswrapper[5113]: E1212 14:36:05.111716 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a\": container with ID starting with cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a not found: ID does not exist" containerID="cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.111761 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a"} err="failed to get container status \"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a\": rpc error: code = NotFound desc = could not find container \"cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a\": container with ID starting with cf924e060064f441565439c1b190a5fc300726a65e619ce4702885f209a3b90a not found: ID does not exist" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.111794 5113 scope.go:117] "RemoveContainer" containerID="2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474" Dec 12 14:36:05 crc kubenswrapper[5113]: E1212 14:36:05.112365 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474\": container with ID starting with 2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474 not found: ID does not exist" containerID="2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.112406 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474"} err="failed to get container status \"2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474\": rpc error: code = NotFound desc = could not find container \"2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474\": container with ID starting with 2c0170ab55dadbc1482b76a04b11e2388d24af9ac55f395c8449dde46a3e5474 not found: ID does not exist" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.112429 5113 scope.go:117] "RemoveContainer" containerID="2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7" Dec 12 14:36:05 crc kubenswrapper[5113]: E1212 14:36:05.112850 5113 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7\": container with ID starting with 2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7 not found: ID does not exist" containerID="2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.112886 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7"} err="failed to get container status \"2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7\": rpc error: code = NotFound desc = could not find container \"2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7\": container with ID starting with 2c08dfe9319684b92e288a696fbb90cf4dc5ec508cae6a090a823d0a810234f7 not found: ID does not exist" Dec 12 14:36:05 crc kubenswrapper[5113]: I1212 14:36:05.498469 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" path="/var/lib/kubelet/pods/45fecb86-a23d-4bc7-b912-a8ed7da5b4ea/volumes" Dec 12 14:36:12 crc kubenswrapper[5113]: E1212 14:36:12.484100 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:36:17 crc kubenswrapper[5113]: E1212 14:36:17.490432 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:36:20 crc kubenswrapper[5113]: I1212 14:36:20.902340 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:36:20 crc 
kubenswrapper[5113]: I1212 14:36:20.902726 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:36:20 crc kubenswrapper[5113]: I1212 14:36:20.902799 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:36:20 crc kubenswrapper[5113]: I1212 14:36:20.903791 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:36:20 crc kubenswrapper[5113]: I1212 14:36:20.903923 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f" gracePeriod=600 Dec 12 14:36:21 crc kubenswrapper[5113]: I1212 14:36:21.143874 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f" exitCode=0 Dec 12 14:36:21 crc kubenswrapper[5113]: I1212 14:36:21.143980 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f"} Dec 12 14:36:21 crc kubenswrapper[5113]: I1212 14:36:21.144507 5113 scope.go:117] "RemoveContainer" containerID="fe2e870b7341741d5ea9466176047f43fbf07ffe98dd1d2f8d6d6cfa613db1c4" Dec 12 14:36:22 crc kubenswrapper[5113]: I1212 14:36:22.153679 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerStarted","Data":"64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"} Dec 12 14:36:26 crc kubenswrapper[5113]: E1212 14:36:26.483649 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:36:30 crc kubenswrapper[5113]: E1212 
14:36:30.483568 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:36:40 crc kubenswrapper[5113]: E1212 14:36:40.483228 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:36:41 crc kubenswrapper[5113]: E1212 14:36:41.482722 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:36:55 crc kubenswrapper[5113]: E1212 14:36:55.483751 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:36:56 crc kubenswrapper[5113]: E1212 14:36:56.483615 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:37:09 crc kubenswrapper[5113]: E1212 14:37:09.484344 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:37:10 crc kubenswrapper[5113]: E1212 14:37:10.483903 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:37:24 crc kubenswrapper[5113]: I1212 14:37:24.483249 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 
14:37:24 crc kubenswrapper[5113]: E1212 14:37:24.484024 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:37:24 crc kubenswrapper[5113]: E1212 14:37:24.547786 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:37:24 crc kubenswrapper[5113]: E1212 14:37:24.548051 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65lbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-wpxz5_service-telemetry(208dda3c-f1f0-4b82-9f0c-12464184846e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:37:24 crc kubenswrapper[5113]: E1212 14:37:24.549285 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:37:36 crc kubenswrapper[5113]: E1212 14:37:36.563818 5113 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 14:37:36 crc kubenswrapper[5113]: E1212 14:37:36.564987 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dbqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hgmfl_service-telemetry(aec662d5-147a-4efb-ac69-80a0fc01a91e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 14:37:36 crc kubenswrapper[5113]: E1212 14:37:36.566349 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" 
podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.811521 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812219 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="extract-utilities" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812241 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="extract-utilities" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812260 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="extract-content" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812266 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="extract-content" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812275 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="registry-server" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812281 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="registry-server" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.812377 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="45fecb86-a23d-4bc7-b912-a8ed7da5b4ea" containerName="registry-server" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.845154 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.845431 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.942426 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlth\" (UniqueName: \"kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.942484 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:36 crc kubenswrapper[5113]: I1212 14:37:36.942691 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.044456 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwlth\" (UniqueName: \"kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.044512 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.044595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.045239 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.045283 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities\") pod \"redhat-operators-b5zv6\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.063768 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwlth\" (UniqueName: \"kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth\") pod \"redhat-operators-b5zv6\" (UID: 
\"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.190430 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.607997 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:37 crc kubenswrapper[5113]: I1212 14:37:37.685102 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerStarted","Data":"523a79c97f67924678f345d89bb425b6c6f9cf1c34821d3f39fe394edf364914"} Dec 12 14:37:38 crc kubenswrapper[5113]: I1212 14:37:38.694748 5113 generic.go:358] "Generic (PLEG): container finished" podID="bb8352ce-0dae-4287-a1ef-b33fb0c4831c" containerID="c8f7aed3ff709ea5785d7637f4d98b3af358d24cc3cbd6ea392985079b86b308" exitCode=0 Dec 12 14:37:38 crc kubenswrapper[5113]: I1212 14:37:38.694821 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerDied","Data":"c8f7aed3ff709ea5785d7637f4d98b3af358d24cc3cbd6ea392985079b86b308"} Dec 12 14:37:39 crc kubenswrapper[5113]: E1212 14:37:39.483807 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:37:40 crc kubenswrapper[5113]: I1212 14:37:40.711002 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerStarted","Data":"63c38c8bd1987fa48d5b374f614955cfe153e87f331248cbd2f1c7fbadd9269b"} Dec 12 14:37:41 crc kubenswrapper[5113]: I1212 14:37:41.721446 5113 generic.go:358] "Generic (PLEG): container finished" podID="bb8352ce-0dae-4287-a1ef-b33fb0c4831c" containerID="63c38c8bd1987fa48d5b374f614955cfe153e87f331248cbd2f1c7fbadd9269b" exitCode=0 Dec 12 14:37:41 crc kubenswrapper[5113]: I1212 14:37:41.721549 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerDied","Data":"63c38c8bd1987fa48d5b374f614955cfe153e87f331248cbd2f1c7fbadd9269b"} Dec 12 14:37:42 crc kubenswrapper[5113]: I1212 14:37:42.732278 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerStarted","Data":"e094053175d9ec1474d7c3034905b47bf5e0ef1e077229b301a41bff64abc312"} 
Dec 12 14:37:42 crc kubenswrapper[5113]: I1212 14:37:42.757133 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b5zv6" podStartSLOduration=5.510143444 podStartE2EDuration="6.75709322s" podCreationTimestamp="2025-12-12 14:37:36 +0000 UTC" firstStartedPulling="2025-12-12 14:37:38.697263222 +0000 UTC m=+1641.532513089" lastFinishedPulling="2025-12-12 14:37:39.944212998 +0000 UTC m=+1642.779462865" observedRunningTime="2025-12-12 14:37:42.750588514 +0000 UTC m=+1645.585838361" watchObservedRunningTime="2025-12-12 14:37:42.75709322 +0000 UTC m=+1645.592343057" Dec 12 14:37:47 crc kubenswrapper[5113]: I1212 14:37:47.191305 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:47 crc kubenswrapper[5113]: I1212 14:37:47.191393 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:47 crc kubenswrapper[5113]: I1212 14:37:47.247982 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:47 crc kubenswrapper[5113]: I1212 14:37:47.825095 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:47 crc kubenswrapper[5113]: I1212 14:37:47.870311 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:49 crc kubenswrapper[5113]: I1212 14:37:49.793659 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b5zv6" podUID="bb8352ce-0dae-4287-a1ef-b33fb0c4831c" containerName="registry-server" containerID="cri-o://e094053175d9ec1474d7c3034905b47bf5e0ef1e077229b301a41bff64abc312" gracePeriod=2 Dec 12 14:37:50 crc kubenswrapper[5113]: E1212 14:37:50.483661 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:37:51 crc kubenswrapper[5113]: E1212 14:37:51.482778 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:37:52 crc kubenswrapper[5113]: I1212 14:37:52.619502 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b7xt7/must-gather-mhjdv"] Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.460827 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b7xt7/must-gather-mhjdv"] Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.460971 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.463675 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-b7xt7\"/\"default-dockercfg-wtxcz\"" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.463723 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-b7xt7\"/\"kube-root-ca.crt\"" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.463925 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-b7xt7\"/\"openshift-service-ca.crt\"" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.504786 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.504855 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhwz\" (UniqueName: \"kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.606088 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.606182 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhwz\" (UniqueName: \"kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.606770 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " 
pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.630112 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhwz\" (UniqueName: \"kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz\") pod \"must-gather-mhjdv\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.776243 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.824615 5113 generic.go:358] "Generic (PLEG): container finished" podID="bb8352ce-0dae-4287-a1ef-b33fb0c4831c" containerID="e094053175d9ec1474d7c3034905b47bf5e0ef1e077229b301a41bff64abc312" exitCode=0 Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.824695 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerDied","Data":"e094053175d9ec1474d7c3034905b47bf5e0ef1e077229b301a41bff64abc312"} Dec 12 14:37:53 crc kubenswrapper[5113]: I1212 14:37:53.974777 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b7xt7/must-gather-mhjdv"] Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.227012 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.319047 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwlth\" (UniqueName: \"kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth\") pod \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.319168 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content\") pod \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.319251 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities\") pod \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\" (UID: \"bb8352ce-0dae-4287-a1ef-b33fb0c4831c\") " Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.320705 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities" (OuterVolumeSpecName: "utilities") pod "bb8352ce-0dae-4287-a1ef-b33fb0c4831c" (UID: "bb8352ce-0dae-4287-a1ef-b33fb0c4831c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.324228 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth" (OuterVolumeSpecName: "kube-api-access-dwlth") pod "bb8352ce-0dae-4287-a1ef-b33fb0c4831c" (UID: "bb8352ce-0dae-4287-a1ef-b33fb0c4831c"). InnerVolumeSpecName "kube-api-access-dwlth". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.420356 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.420605 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwlth\" (UniqueName: \"kubernetes.io/projected/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-kube-api-access-dwlth\") on node \"crc\" DevicePath \"\"" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.436005 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb8352ce-0dae-4287-a1ef-b33fb0c4831c" (UID: "bb8352ce-0dae-4287-a1ef-b33fb0c4831c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.521968 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8352ce-0dae-4287-a1ef-b33fb0c4831c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.835025 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5zv6" event={"ID":"bb8352ce-0dae-4287-a1ef-b33fb0c4831c","Type":"ContainerDied","Data":"523a79c97f67924678f345d89bb425b6c6f9cf1c34821d3f39fe394edf364914"} Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.835076 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5zv6" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.835142 5113 scope.go:117] "RemoveContainer" containerID="e094053175d9ec1474d7c3034905b47bf5e0ef1e077229b301a41bff64abc312" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.837067 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" event={"ID":"31153298-d269-4ccf-a13f-364f7f59f617","Type":"ContainerStarted","Data":"0272f5b65964bd5b5ffe8d283ced5f2b865ff13d4e47b45699342d0bf6201158"} Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.865735 5113 scope.go:117] "RemoveContainer" containerID="63c38c8bd1987fa48d5b374f614955cfe153e87f331248cbd2f1c7fbadd9269b" Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.876309 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.881817 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b5zv6"] Dec 12 14:37:54 crc kubenswrapper[5113]: I1212 14:37:54.905606 5113 scope.go:117] "RemoveContainer" containerID="c8f7aed3ff709ea5785d7637f4d98b3af358d24cc3cbd6ea392985079b86b308" Dec 12 14:37:55 crc kubenswrapper[5113]: I1212 14:37:55.493381 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8352ce-0dae-4287-a1ef-b33fb0c4831c" path="/var/lib/kubelet/pods/bb8352ce-0dae-4287-a1ef-b33fb0c4831c/volumes" Dec 12 14:37:58 crc kubenswrapper[5113]: I1212 14:37:58.487331 5113 ???:1] "http: TLS handshake error from 192.168.126.11:52816: no serving certificate available for the kubelet" Dec 12 14:38:00 crc kubenswrapper[5113]: I1212 14:38:00.881150 5113 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" event={"ID":"31153298-d269-4ccf-a13f-364f7f59f617","Type":"ContainerStarted","Data":"8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38"} Dec 12 14:38:00 crc kubenswrapper[5113]: I1212 14:38:00.881405 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" event={"ID":"31153298-d269-4ccf-a13f-364f7f59f617","Type":"ContainerStarted","Data":"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8"} Dec 12 14:38:00 crc kubenswrapper[5113]: I1212 14:38:00.904722 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" podStartSLOduration=3.046920692 podStartE2EDuration="8.904703385s" podCreationTimestamp="2025-12-12 14:37:52 +0000 UTC" firstStartedPulling="2025-12-12 14:37:53.98359908 +0000 UTC m=+1656.818848907" lastFinishedPulling="2025-12-12 14:37:59.841381763 +0000 UTC m=+1662.676631600" observedRunningTime="2025-12-12 14:38:00.900359777 +0000 UTC m=+1663.735609644" watchObservedRunningTime="2025-12-12 14:38:00.904703385 +0000 UTC m=+1663.739953212" Dec 12 14:38:02 crc kubenswrapper[5113]: I1212 14:38:02.706015 5113 ???:1] "http: TLS handshake error from 192.168.126.11:52826: no serving certificate available for the kubelet" Dec 12 14:38:03 crc kubenswrapper[5113]: E1212 14:38:03.518686 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:38:05 crc kubenswrapper[5113]: E1212 14:38:05.490180 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:38:17 crc kubenswrapper[5113]: E1212 14:38:17.491710 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:38:18 crc kubenswrapper[5113]: E1212 14:38:18.482901 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:38:31 crc kubenswrapper[5113]: E1212 14:38:31.483623 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:38:32 crc kubenswrapper[5113]: E1212 14:38:32.483160 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:38:37 crc kubenswrapper[5113]: I1212 14:38:37.768639 5113 ???:1] "http: TLS handshake error from 192.168.126.11:57506: no serving certificate available for the kubelet" Dec 12 14:38:37 crc kubenswrapper[5113]: I1212 14:38:37.861625 5113 ???:1] "http: TLS handshake error from 192.168.126.11:57520: no serving certificate available for the kubelet" Dec 12 14:38:37 crc kubenswrapper[5113]: I1212 14:38:37.907151 5113 ???:1] "http: TLS handshake error from 192.168.126.11:57528: no serving certificate available for the kubelet" Dec 12 14:38:42 crc kubenswrapper[5113]: E1212 14:38:42.485204 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:38:45 crc kubenswrapper[5113]: E1212 14:38:45.483108 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:38:48 crc kubenswrapper[5113]: I1212 14:38:48.945956 5113 ???:1] "http: TLS handshake error from 192.168.126.11:47634: no serving certificate available for the kubelet" Dec 12 14:38:49 crc kubenswrapper[5113]: I1212 14:38:49.060549 5113 ???:1] "http: TLS handshake error from 192.168.126.11:47638: no serving certificate available for the kubelet" Dec 12 14:38:49 crc kubenswrapper[5113]: I1212 14:38:49.112337 5113 ???:1] "http: TLS handshake error from 192.168.126.11:47654: no serving certificate available for the kubelet" Dec 12 14:38:50 crc kubenswrapper[5113]: I1212 14:38:50.901869 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:38:50 crc kubenswrapper[5113]: I1212 14:38:50.901938 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:38:56 crc kubenswrapper[5113]: E1212 14:38:56.484426 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:38:58 crc kubenswrapper[5113]: E1212 14:38:58.483381 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.056262 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35224: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.255813 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35238: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.263485 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35242: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.268316 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35252: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.448751 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35262: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.454363 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35274: no serving certificate available for the kubelet" Dec 12 
14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.454579 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35284: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.623842 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35286: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.848583 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35302: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.850027 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35306: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.855538 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35316: no serving certificate available for the kubelet" Dec 12 14:39:03 crc kubenswrapper[5113]: I1212 14:39:03.992625 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35330: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.026303 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35334: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.036663 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35338: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.157320 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35352: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.317589 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35366: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.330223 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35372: no serving certificate available for the kubelet" Dec 12 14:39:04 crc kubenswrapper[5113]: I1212 14:39:04.343088 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35388: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.408582 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35398: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.410274 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35400: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.415932 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35412: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.631099 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59770: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.789064 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59778: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.795453 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59792: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.797544 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59806: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.944796 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59810: no serving certificate 
available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.952419 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59814: no serving certificate available for the kubelet" Dec 12 14:39:05 crc kubenswrapper[5113]: I1212 14:39:05.956742 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59830: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.126395 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59846: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.463816 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59850: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.479909 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59856: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.522226 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59862: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.613577 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59866: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.653473 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59870: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.654729 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59876: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.734386 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59884: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.824820 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59900: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.922900 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59916: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.939285 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59932: no serving certificate available for the kubelet" Dec 12 14:39:06 crc kubenswrapper[5113]: I1212 14:39:06.948622 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59936: no serving certificate available for the kubelet" Dec 12 14:39:07 crc kubenswrapper[5113]: I1212 14:39:07.097876 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59940: no serving certificate available for the kubelet" Dec 12 14:39:07 crc kubenswrapper[5113]: I1212 14:39:07.122500 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59948: no serving certificate available for the kubelet" Dec 12 14:39:07 crc kubenswrapper[5113]: I1212 14:39:07.123088 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59958: no serving certificate available for the kubelet" Dec 12 14:39:08 crc kubenswrapper[5113]: E1212 14:39:08.483248 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:39:12 crc kubenswrapper[5113]: E1212 14:39:12.482765 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:39:18 crc kubenswrapper[5113]: I1212 14:39:18.698220 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59992: no serving certificate available for the kubelet" Dec 12 14:39:18 crc kubenswrapper[5113]: I1212 14:39:18.860369 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59994: no serving certificate available for the kubelet" Dec 12 14:39:18 crc kubenswrapper[5113]: I1212 14:39:18.876252 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59996: no serving certificate available for the kubelet" Dec 12 14:39:19 crc kubenswrapper[5113]: I1212 14:39:19.005632 5113 ???:1] "http: TLS handshake error from 192.168.126.11:59998: no serving certificate available for the kubelet" Dec 12 14:39:19 crc kubenswrapper[5113]: I1212 14:39:19.021164 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60006: no serving certificate available for the kubelet" Dec 12 14:39:20 crc kubenswrapper[5113]: I1212 14:39:20.901391 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:39:20 crc kubenswrapper[5113]: I1212 14:39:20.901483 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:39:22 crc kubenswrapper[5113]: E1212 14:39:22.483792 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable 
to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:39:26 crc kubenswrapper[5113]: E1212 14:39:26.483376 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:39:37 crc kubenswrapper[5113]: E1212 14:39:37.482874 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:39:40 crc kubenswrapper[5113]: E1212 14:39:40.483266 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" 
podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:39:50 crc kubenswrapper[5113]: I1212 14:39:50.901785 5113 patch_prober.go:28] interesting pod/machine-config-daemon-5dn52 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 14:39:50 crc kubenswrapper[5113]: I1212 14:39:50.903747 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 14:39:50 crc kubenswrapper[5113]: I1212 14:39:50.903890 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" Dec 12 14:39:50 crc kubenswrapper[5113]: I1212 14:39:50.904584 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"} pod="openshift-machine-config-operator/machine-config-daemon-5dn52" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 14:39:50 crc kubenswrapper[5113]: I1212 14:39:50.904741 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerName="machine-config-daemon" containerID="cri-o://64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" gracePeriod=600 Dec 12 14:39:51 crc kubenswrapper[5113]: E1212 14:39:51.046403 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:39:51 crc kubenswrapper[5113]: E1212 14:39:51.483048 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:39:51 crc kubenswrapper[5113]: I1212 14:39:51.768813 5113 generic.go:358] "Generic (PLEG): container finished" podID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" 
exitCode=0 Dec 12 14:39:51 crc kubenswrapper[5113]: I1212 14:39:51.769198 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" event={"ID":"5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68","Type":"ContainerDied","Data":"64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"} Dec 12 14:39:51 crc kubenswrapper[5113]: I1212 14:39:51.769342 5113 scope.go:117] "RemoveContainer" containerID="b14d5e02ce90b66b351cfb0177014095891ccb9f0e036ac62a2ba4f072ea682f" Dec 12 14:39:51 crc kubenswrapper[5113]: I1212 14:39:51.769843 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:39:51 crc kubenswrapper[5113]: E1212 14:39:51.770186 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:39:54 crc kubenswrapper[5113]: E1212 14:39:54.483334 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:39:55 crc kubenswrapper[5113]: I1212 14:39:55.804168 5113 generic.go:358] "Generic (PLEG): container finished" podID="31153298-d269-4ccf-a13f-364f7f59f617" containerID="b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8" exitCode=0 Dec 12 14:39:55 crc kubenswrapper[5113]: I1212 14:39:55.804300 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" event={"ID":"31153298-d269-4ccf-a13f-364f7f59f617","Type":"ContainerDied","Data":"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8"} Dec 12 14:39:55 crc kubenswrapper[5113]: I1212 14:39:55.804707 5113 scope.go:117] "RemoveContainer" containerID="b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.482680 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:40:02 crc kubenswrapper[5113]: E1212 14:40:02.483710 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.732969 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49928: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.914227 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49936: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.924176 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49946: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.946633 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49958: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.956130 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49960: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.970223 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49970: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.981491 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49976: no serving certificate available for the kubelet" Dec 12 14:40:02 crc kubenswrapper[5113]: I1212 14:40:02.993701 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49980: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.003330 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49982: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.162734 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49990: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.174380 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49998: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.197770 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50010: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.207820 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50026: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.222723 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50034: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.232435 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50040: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.244748 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50042: no serving certificate available for the kubelet" Dec 12 14:40:03 crc kubenswrapper[5113]: I1212 14:40:03.255679 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50050: no serving certificate available for the kubelet" Dec 12 14:40:05 crc kubenswrapper[5113]: E1212 14:40:05.483336 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: 
pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:40:08 crc kubenswrapper[5113]: I1212 14:40:08.299752 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b7xt7/must-gather-mhjdv"] Dec 12 14:40:08 crc kubenswrapper[5113]: I1212 14:40:08.300236 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" podUID="31153298-d269-4ccf-a13f-364f7f59f617" containerName="copy" containerID="cri-o://8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38" gracePeriod=2 Dec 12 14:40:08 crc kubenswrapper[5113]: I1212 14:40:08.302089 5113 status_manager.go:895] "Failed to get status for pod" podUID="31153298-d269-4ccf-a13f-364f7f59f617" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" err="pods \"must-gather-mhjdv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-b7xt7\": no relationship found between node 'crc' and this object" Dec 12 14:40:08 crc kubenswrapper[5113]: I1212 14:40:08.313230 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b7xt7/must-gather-mhjdv"] Dec 12 14:40:08 crc kubenswrapper[5113]: E1212 14:40:08.483385 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.524899 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b7xt7_must-gather-mhjdv_31153298-d269-4ccf-a13f-364f7f59f617/copy/0.log" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.525604 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.603658 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhwz\" (UniqueName: \"kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz\") pod \"31153298-d269-4ccf-a13f-364f7f59f617\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.603719 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output\") pod \"31153298-d269-4ccf-a13f-364f7f59f617\" (UID: \"31153298-d269-4ccf-a13f-364f7f59f617\") " Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.613785 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz" (OuterVolumeSpecName: "kube-api-access-2zhwz") pod "31153298-d269-4ccf-a13f-364f7f59f617" (UID: "31153298-d269-4ccf-a13f-364f7f59f617"). InnerVolumeSpecName "kube-api-access-2zhwz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.644837 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "31153298-d269-4ccf-a13f-364f7f59f617" (UID: "31153298-d269-4ccf-a13f-364f7f59f617"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.705342 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zhwz\" (UniqueName: \"kubernetes.io/projected/31153298-d269-4ccf-a13f-364f7f59f617-kube-api-access-2zhwz\") on node \"crc\" DevicePath \"\"" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.705603 5113 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/31153298-d269-4ccf-a13f-364f7f59f617-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.894263 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b7xt7_must-gather-mhjdv_31153298-d269-4ccf-a13f-364f7f59f617/copy/0.log" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.894715 5113 generic.go:358] "Generic (PLEG): container finished" podID="31153298-d269-4ccf-a13f-364f7f59f617" containerID="8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38" exitCode=143 Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.894808 5113 scope.go:117] "RemoveContainer" containerID="8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.894953 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b7xt7/must-gather-mhjdv" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.919361 5113 scope.go:117] "RemoveContainer" containerID="b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.985754 5113 scope.go:117] "RemoveContainer" containerID="8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38" Dec 12 14:40:09 crc kubenswrapper[5113]: E1212 14:40:09.986175 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38\": container with ID starting with 8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38 not found: ID does not exist" containerID="8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.986218 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38"} err="failed to get container status \"8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38\": rpc error: code = NotFound desc = could not find container \"8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38\": container with ID starting with 8e8a1a19be6b633f27599a36fe22baef372b911b8407002bee3fd735b9222b38 not found: ID does not exist" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.986246 5113 scope.go:117] "RemoveContainer" containerID="b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8" Dec 12 14:40:09 crc kubenswrapper[5113]: E1212 14:40:09.986718 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8\": container with ID starting with b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8 not found: ID does not exist" containerID="b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8" Dec 12 14:40:09 crc kubenswrapper[5113]: I1212 14:40:09.986758 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8"} err="failed to get container status \"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8\": rpc error: code = NotFound desc = could not find container \"b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8\": container with ID starting with b1145a8657c190281d59f285155c027261970047b2f53b386698d5ecf7ba40a8 not found: ID does not exist" Dec 12 14:40:11 crc kubenswrapper[5113]: I1212 14:40:11.490728 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31153298-d269-4ccf-a13f-364f7f59f617" path="/var/lib/kubelet/pods/31153298-d269-4ccf-a13f-364f7f59f617/volumes" Dec 12 14:40:14 crc kubenswrapper[5113]: I1212 14:40:14.483250 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:40:14 crc kubenswrapper[5113]: E1212 14:40:14.484042 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:40:19 crc kubenswrapper[5113]: E1212 14:40:19.483680 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:40:20 crc kubenswrapper[5113]: I1212 14:40:20.168505 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:40:20 crc kubenswrapper[5113]: I1212 14:40:20.169501 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hnmf9_f61630ce-4572-40eb-b245-937168ad79d4/kube-multus/0.log" Dec 12 14:40:20 crc kubenswrapper[5113]: I1212 14:40:20.173838 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:40:20 crc kubenswrapper[5113]: I1212 14:40:20.174043 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 14:40:23 crc kubenswrapper[5113]: E1212 14:40:23.483960 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:40:29 crc kubenswrapper[5113]: I1212 14:40:29.483326 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:40:29 crc kubenswrapper[5113]: E1212 14:40:29.484913 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:40:34 crc kubenswrapper[5113]: E1212 14:40:34.483830 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:40:35 crc kubenswrapper[5113]: E1212 14:40:35.483918 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:40:41 crc kubenswrapper[5113]: I1212 14:40:41.485970 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:40:41 crc kubenswrapper[5113]: E1212 14:40:41.488339 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:40:46 crc kubenswrapper[5113]: E1212 14:40:46.483283 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: 
Dec 12 14:40:48 crc kubenswrapper[5113]: E1212 14:40:48.483852 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:40:54 crc kubenswrapper[5113]: I1212 14:40:54.482911 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:40:54 crc kubenswrapper[5113]: E1212 14:40:54.483681 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:00 crc kubenswrapper[5113]: E1212 14:41:00.482659 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e"
Dec 12 14:41:02 crc kubenswrapper[5113]: E1212 14:41:02.483684 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:41:06 crc kubenswrapper[5113]: I1212 14:41:06.483436 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:41:06 crc kubenswrapper[5113]: E1212 14:41:06.483946 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:14 crc kubenswrapper[5113]: E1212 14:41:14.483630 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e"
Dec 12 14:41:16 crc kubenswrapper[5113]: E1212 14:41:16.483182 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:41:18 crc kubenswrapper[5113]: I1212 14:41:18.482462 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:41:18 crc kubenswrapper[5113]: E1212 14:41:18.482800 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:27 crc kubenswrapper[5113]: E1212 14:41:27.487788 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:41:28 crc kubenswrapper[5113]: E1212 14:41:28.483755 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e"
Dec 12 14:41:30 crc kubenswrapper[5113]: I1212 14:41:30.484299 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:41:30 crc kubenswrapper[5113]: E1212 14:41:30.484749 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:40 crc kubenswrapper[5113]: E1212 14:41:40.483607 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:41:41 crc kubenswrapper[5113]: I1212 14:41:41.483245 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:41:41 crc kubenswrapper[5113]: E1212 14:41:41.483648 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:42 crc kubenswrapper[5113]: E1212 14:41:42.483644 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e"
Dec 12 14:41:54 crc kubenswrapper[5113]: E1212 14:41:54.484634 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
Dec 12 14:41:56 crc kubenswrapper[5113]: I1212 14:41:56.482601 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449"
Dec 12 14:41:56 crc kubenswrapper[5113]: E1212 14:41:56.483042 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68"
Dec 12 14:41:56 crc kubenswrapper[5113]: E1212 14:41:56.483069 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e"
Dec 12 14:42:02 crc kubenswrapper[5113]: E1212 14:42:02.574967 5113 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.755021 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.765300 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.794696 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60278: no serving certificate available for the kubelet"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.833200 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60284: no serving certificate available for the kubelet"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.868158 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60288: no serving certificate available for the kubelet"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.909600 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60304: no serving certificate available for the kubelet"
Dec 12 14:42:06 crc kubenswrapper[5113]: I1212 14:42:06.970396 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60306: no serving certificate available for the kubelet"
Dec 12 14:42:07 crc kubenswrapper[5113]: I1212 14:42:07.082128 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60310: no serving certificate available for the kubelet"
Dec 12 14:42:07 crc kubenswrapper[5113]: I1212 14:42:07.264255 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60318: no serving certificate available for the kubelet"
Dec 12 14:42:07 crc kubenswrapper[5113]: I1212 14:42:07.619716 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60330: no serving certificate available for the kubelet"
Dec 12 14:42:08 crc kubenswrapper[5113]: I1212 14:42:08.301628 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60336: no serving certificate available for the kubelet"
Dec 12 14:42:08 crc kubenswrapper[5113]: E1212 14:42:08.483788 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e"
\"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-wpxz5" podUID="208dda3c-f1f0-4b82-9f0c-12464184846e" Dec 12 14:42:09 crc kubenswrapper[5113]: I1212 14:42:09.608539 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60342: no serving certificate available for the kubelet" Dec 12 14:42:10 crc kubenswrapper[5113]: I1212 14:42:10.482738 5113 scope.go:117] "RemoveContainer" containerID="64340c4668bc03faf4ffefcdab240684c7dc4e34af42efb171a91e167d2b5449" Dec 12 14:42:10 crc kubenswrapper[5113]: E1212 14:42:10.483115 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dn52_openshift-machine-config-operator(5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dn52" podUID="5dc3b5c9-b0dc-4a41-9b40-e7367e1e6f68" Dec 12 14:42:11 crc kubenswrapper[5113]: E1212 14:42:11.483725 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hgmfl" podUID="aec662d5-147a-4efb-ac69-80a0fc01a91e" Dec 12 14:42:12 crc kubenswrapper[5113]: I1212 14:42:12.194194 5113 ???:1] "http: TLS handshake error from 192.168.126.11:60346: no serving certificate available for the kubelet" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515117024712024445 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015117024713017363 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015117020510016475 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015117020511015446 5ustar corecore